O R G A N I C / F E R T I L I Z E R: 2011

Dec 28, 2011

listing the group membership of a computer in opsmgr [part 3]

a long time ago, I posted about this stuff...

http://marcusoh.blogspot.com/2010/01/listing-group-membership-of-computer-in.html
http://marcusoh.blogspot.com/2010/01/listing-group-membership-of-computer-in_06.html

both of which were just works in progress... and as it turns out, completely wrong! I ran into this post this morning that simplified what I was doing down to a few lines. here it is:

# grab the group by display name (opsmgr 2007 snap-in)
$group = Get-MonitoringObject | Where-Object { $_.DisplayName -eq "YourGroupName" }
$MonitoringClass = Get-MonitoringClass -Name "Microsoft.Windows.Computer"
# return every windows computer in the group, including nested members
$group.GetRelatedMonitoringObjects($MonitoringClass,"Recursive") | Select-Object DisplayName

source: http://michielw.blogspot.com/2010/12/scom-get-nested-group-members.html
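
one note: Get-MonitoringObject with no filter pulls back every monitoring object in the management group, which can take a while. if the cmdlet's server-side filtering works in your environment, something like this should be faster (a sketch; i haven't timed it):

$group = Get-MonitoringObject -Criteria "DisplayName = 'YourGroupName'"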

Dec 27, 2011

my 15 most popular posts of 2011

in the interest of full disclosure: at some point, blogger or analytics did something... and I wasn't tracking page hits for the majority of the year, so this actually covers only about the last four months. unfortunately, blogger stats provide limited filtering for dates. I was hoping to have some good information for a post like this. oh well. it'll have to do. :)

  1. understanding the “ad op master is inconsistent” alert
  2. how to retrieve your ip address with powershell...
  3. sccm: content hash fails to match
  4. executing batch files remotely with psexec …
  5. sccm: client stuck downloading package with bit .tmp files in cache directory
  6. misc: netstumbler in vista...
  7. outlook 2010 does not successfully book a resource
  8. sccm: integrating dell warranty data into configmgr
  9. writing event log entries with powershell
  10. using preloadpkgonsite.exe to stage compressed copies to child site distribution points
  11. list domain controllers with powershell
  12. using powershell to replace “find” or “findstr”
  13. how to synchronize sticky notes in windows 7
  14. sccm: custom data discovery records (DDRs) using powershell
  15. preloadpkgonsite generates failed to get specified package in database errors

moving to endpoint protection

made the switch this morning to forefront endpoint protection -- or what will be known as system center endpoint protection. most of it went okay, but there were a couple of mcafee components that made the process PAINFUL! believe it or not, the antivirus component was not it.

the removal of the host intrusion prevention system (hips) and the mcafee agent itself were both more time-consuming than they should have been, each with its own peculiarity. :/

 

removing hips

attempting to remove the hips agent may produce an error about needing to "disable self-protect mode." I am shamelessly stealing this from the site kmit4u because the instructions are quite near perfect and don't need revising:

  1. Click Start, Run, type explorer and click OK.
  2. Navigate to: C:\Program Files\McAfee\Host Intrusion Prevention\
  3. Double-click McAfeeFire.exe.
  4. Click Task, Unlock User Interface.
  5. Type the unlock code, and select Administrator Password.
    NOTE: By default, the unlock code is abcde12345
  6. After the user interface is unlocked, click the IPS Policy tab. 
  7. Deselect Enable Host IPS and Enable Network IPS. (The Firewall Policy can be disabled on its own tab.)
  8. Select Task, Exit.
    Credit: Knowledge Management IT for you: How to disable the Host Intrusion Prevention(IDS) Mcafee disable self-protect mode
    Under Creative Commons License: Attribution

after following those steps, i was able to remove the hips agent!

 

removing the mcafee agent

while I was attempting to remove the mcafee agent, it kept telling me that "other products are still using it." uh, no. I removed all other products! here's how you force it:

  1. open up a cmd prompt.
  2. navigate to the directory where frminst.exe is located. generally this is in the "common framework" directory of mcafee.
  3. run the following:

frminst.exe /forceuninstall

now, smile happily as you have slain the hideous beast!
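
oh, and if you have a pile of machines to clean up, you could push the same command with psexec. a hypothetical sketch (the "common framework" path varies by agent version, so verify it first):

& psexec.exe \\targetPC "C:\Program Files\McAfee\Common Framework\frminst.exe" /forceuninstall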

Nov 23, 2011

sccm: computers with names greater than 15 characters

and coincidentally, blog posts with really long titles.

if you run into scenarios where computers with names longer than 15 characters are exhibiting strange issues in an application, you can root out these computers with configmgr or AD. while the computer itself may show the full machine name, the records for it in AD and configmgr show a truncated value.

this is interesting because the computer is registered with its long name in DNS. it can get "interesting" when you see a truncated name that doesn't resolve (especially where WINS is involved and handles the netbios lookup, adding further confusion).

funny enough, the place where you root out this problem is kind of the same for both AD and configmgr: it's the DNS host name that gives it away! take a look at these screenshots -- from SCCM and AD, respectively:

image

(removed the computer names out of this screenshot)

image

so what do you do with this information, now that you know? well, let's look at ways of identifying it.

first of all, you can count the characters to see how many are in the name. since dnsname and dnshostname contain the same value, we could use something like this:

((Get-QADComputer myComputer).dnshostname.split(".")[0]) | Measure-object -Character

Characters
----------
16


now we can loop that into a general search for the entire domain:

Get-QADComputer -SearchRoot 'whatev you use' -SizeLimit 0 | where {($_.dnshostname.split(".")[0] | Measure-Object -Character).characters -gt 15}
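
no quest cmdlets? here's a rough equivalent using the built-in [adsisearcher] type accelerator (a sketch; it searches the default naming context):

# find computers whose dns host name (first label) is longer than 15 characters
$searcher = [adsisearcher]'(objectCategory=computer)'
$searcher.PageSize = 1000
[void]$searcher.PropertiesToLoad.Add('dnshostname')
$searcher.FindAll() |
    Where-Object { $_.Properties['dnshostname'] -and
        ($_.Properties['dnshostname'][0].Split('.')[0].Length -gt 15) } |
    ForEach-Object { $_.Properties['dnshostname'][0] }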

if sql is your thing and you prefer to query configmgr, here's the sql query:

select distinct dnshostname0 
from dbo.v_GS_NETWORK_ADAPTER_CONFIGUR
where LEN(dnshostname0) > 15

hope that helps!

Nov 16, 2011

sccm: how old is my data?

here is something I wrote up not too long ago for the benefit of some of my coworkers. it's nothing new and might be something most of you already know, but I wanted to post it just in case. it's a question I'm asked often... so it seemed like something worth sharing.

 

How old is my data?

Let me first summarize a few things. SCCM is generally configured to hold a set amount of data per given data type. In some cases, you may find that you have data that is older than the specified age value (90 today, 30 sometime soon). So what’s going on here? The short answer is, the data is still current. There are many things that comprise the composite object record. Let’s take my client for example. Here is a screen shot:

image

As you will notice, Agent Name, Agent Site, and Agent Time all contain numbers in brackets. The [#] is an index of the property, so to speak. For example, if I were interested in Heartbeat Discovery, I would read the values this way:

Agent Name[0] : Heartbeat Discovery
Agent Site[0] : XYZ
Agent Time[0] : 10/28/2011 7:31:05 AM

As you can see, this discovery record for my client actually contains three distinct discoveries that continue to supply it with fresh discovery information.
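
incidentally, if you want to pull these same arrays without the console, they live on SMS_R_System in the SMS provider. a minimal sketch (site server, site code, and computer name are placeholders; assumes exactly one matching record):

# agenttime comes back in raw wmi datetime format
$res = Get-WmiObject -ComputerName mySiteServer -Namespace "root\sms\site_XYZ" `
    -Query "select * from SMS_R_System where Name = 'myComputer'"
for ($i = 0; $i -lt $res.AgentName.Count; $i++) {
    "{0} [{1}] : {2}" -f $res.AgentName[$i], $res.AgentSite[$i], $res.AgentTime[$i]
}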

 

When will my data age out?

Data ages out of SCCM when all of the discoveries cease to meet the criteria. For example, in the screenshot above, if Heartbeat Discovery no longer provided discovery data, the agent time would continue to get older and older while the other two discoveries would continue to stay up to date. As long as there are up to date discoveries for a composite record, the entire record is considered fresh.

 

Why does this matter?

If you look at the agent names, you’ll notice three. Heartbeat Discovery is the agent itself providing discovery data once a day. As the name implies, it’s a heartbeat basically saying – I AM ALIVE! The other two discoveries are Active Directory based. Normally, you will see that old data persists because of the AD discoveries.

As long as the computer account persists in AD, the AD discovery will pick it up and post a new agent time. As you can see, the time value for AD discovery is very close to 12:00:00 AM, which is when most of the AD discoveries are kicked off. Heartbeat data, on the other hand, is usually sent at machine startup and every 24 hours after that.

image

Oct 18, 2011

enable verbose logging on a sccm server

lots of posts exist about how to do this for clients. i found only a few places that indicated how to do it for servers.

(unfortunately, it wasn't on the holy grail of screen real estate for searches -- which is right before I have to scroll, ensuring i will never find it again. :] )

component registry path

  • navigate to: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\COMPONENTS
  • under this path, select the component key name you're interested in such as SMS_DISCOVERY_DATA_MANAGER.
  • locate the dword value "Verbose Logging" and set the value to 1.

image

 

tracing registry path (turns on sql tracing)

  • navigate to: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\Tracing
  • locate the dword value "SqlEnabled" and set the value to 1.

image

  • under the tracing path above, select the component key name you're interested in, such as SMS_DISCOVERY_DATA_MANAGER.
  • ensure the value "Enabled" is set to 1.

image

don't forget to cycle the SMS_EXECUTIVE service.
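
if you'd rather script it than click through regedit, here's a minimal sketch (run on the site server with admin rights; the component name is just the example from above):

# turn on verbose logging for a component
$comp = "HKLM:\SOFTWARE\Microsoft\SMS\COMPONENTS\SMS_DISCOVERY_DATA_MANAGER"
Set-ItemProperty -Path $comp -Name "Verbose Logging" -Value 1

# turn on sql tracing
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\SMS\Tracing" -Name "SqlEnabled" -Value 1

# cycle the executive so the changes take effect
Restart-Service SMS_EXECUTIVE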

Sep 26, 2011

how to verify a server is running r2

identifying windows server 2003 machines running r2 is a bit obscure. with windows server 2008, you can simply check the build number. unfortunately, in 2003, it's a little more hidden. here are a couple of ways of getting the information.

 

using configmgr

in configmgr, one place the information is contained is the view v_gs_operating_system, in the name0 column:

select sys.Name0, os.Name0
from v_r_system sys
inner join v_GS_OPERATING_SYSTEM os on os.ResourceID = sys.ResourceID
where os.Name0 like '%2003%R2%'
order by sys.Name0

 

using wmi

in wmi, the information is stored in win32_operatingsystem in the attribute "OtherTypeDescription," as indicated in this msdn article.

OtherTypeDescription
Data type: string
Access type: Read-only
Additional description for the current operating system version.

Windows Server 2003 R2: Contains the string "R2".
Windows Server 2003, Windows XP, Windows 2000, and Windows NT 4.0: OtherTypeDescription is null.

 

you could use a powershell command like this to check a server:

Get-WmiObject -ComputerName myComputer -Class win32_operatingsystem | fl __server, caption, othertypedescription

resulting in this:

__SERVER             : myComputer
caption              : Microsoft(R) Windows(R) Server 2003 Standard x64 Edition
othertypedescription : R2
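
and if you want to sweep a whole list of servers instead of checking one at a time, a quick sketch (assumes you can reach them all over wmi; servers.txt is one name per line):

Get-Content .\servers.txt | ForEach-Object {
    $os = Get-WmiObject -ComputerName $_ -Class win32_operatingsystem -ErrorAction SilentlyContinue
    if ($os -and $os.OtherTypeDescription -eq 'R2') { $_ }   # print only the r2 boxes
}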

 

thanks erik and chip. :)

Aug 16, 2011

windows anti-virus exclusion list

if you manage anti-virus in any form, you are already familiar with exclusion lists. this wiki article supplies links to just about any microsoft product you can think of and all of the exclusions you may need.

http://social.technet.microsoft.com/wiki/contents/articles/953.aspx

pro tip: if you subscribe to the article, you can get updates whenever more products are added.

Aug 10, 2011

sccm: integrating dell warranty data into configmgr

UPDATE: forgot to add the preparatory step to randomize the datescriptran values.
UPDATE: greg ramsey asked me to post the script so here it is. click HERE. it's called MCO_DellWarranty.zip
UPDATE: somehow i included scott’s file in my zip which caused some confusion. i have since removed it so if you need it, be sure to visit his blog for the freshest version. thanks dionna for pointing it out!

this is a process for joining dell warranty information to your configmgr discovery and inventory database. the challenge in the past has been that dell was not very forthcoming with an easy-to-query, easy-to-access warranty database. often it had to be delivered to you in a spreadsheet or scraped off a webpage, which made for inconsistent data updates and constant script failures.

 

history

in my time as an administrator, i have used many iterations of this. the first was a little vbscript i wrote which would take a text dump i got from dell, read in the values, and insert the records into a table. while this worked, getting dell to send me the files on a routine basis was most cumbersome.

the second, a bit more elaborate, was a powershell script i wrote which would go to a webpage with the specified serial number and scrape the information. this process worked relatively well, except that their webpage format changed constantly, causing the script to error out more often than not.

 

background

hopefully this is the last iteration. this version is a play on the one created by scott keiffer. he created a module which utilizes a web service that dell set up. (you didn't know there was a web service?) along with the module, scott wrote a powershell script that retrieves warranty data and inserts it into a sql table.

http://xserv.dell.com/services/assetservice.asmx

there are two ways to get the data in his solution: one is done client side, the other server side. my preference is the server-side method, but i wanted to change things up a bit so that it worked in a manner i was more comfortable with. in particular, i wanted to implement it so that it would do the following:

  • do not drop the database table on every execution
  • only create the table when it doesn't already exist
  • update records that exist and insert new ones that do not
  • update stale records

almost every change i had to make was inside the sql query statements, so for the most part it would have been fairly basic for most sql dbas. i, however, am NO dba, hence the effort required of me was much greater than i care to admit. :-|

 

script changes

my script alterations were minor. i think you'll agree. here's a short run down with the why:

  • $tablequery (62) - this is a variable in which i'm holding a sql statement that creates the table IF the table DOES NOT already exist. this is handled in a sql if statement. (the original script dropped the table and created a new one each time.)
  • $dellquery (81) - this variable also stores a sql statement. the query looks for any system that matches a dell profile AND either does not have a warranty end date in the dellwarrantyinfo table OR has a datescriptran value older than a randomized number. (the original script simply pulled back any system that matched a dell profile.)
  • check for null value or empty arrays (102) - this is just a slight modification to adjust to the kind of data pulled in by the $dellquery change. i didn't correct the error statements for when no systems are detected for updating. i guess i'm being lazy since it works just fine.
  • update statement instead of insert (114) - i added an if statement here to handle how sql manages the record: if it doesn't exist, insert it; if it does, update it. i figure the tradeoff of not having to gather the warranty data and create a new record on every run (in environments with a lot of systems) is fair for having to look up whether the record exists.

 

preparatory step

as you can imagine, all of the datescriptran values would be identical for anything processed during the first execution. this doesn't bode well for updating a group of systems at a time, since they would all evaluate to having old warranty data at the same point in the future. so... after your first execution, you will need to run the following sql statement, which will randomize the date values to some degree:

UPDATE DellWarrantyInfo
SET [DateScriptRan] = DateAdd(dd, -(ABS(CHECKSUM(NEWID())) % 180 + 23), GetDate())

this sets the dates back anywhere from 23 to 202 days (the modulo yields 0-179, plus the 23-day floor). you can move the values around to whatever is appropriate for you.

 

the script

ready to play? i cannot stress enough how IMPORTANT it is to test, test, test in a lab environment. there are too many environmental differences to run things blindly. i don't even know if it runs in my environment correctly. ;)

<#
DELL Warranty Info - v1.0
Scott Keiffer, 2011 (skeiffer_A_T_cm_D_O_T_utexas_D_O_T_edu)
Marcus C. Oh, 8/10/2011 (marcus.oh_at_g_mail_dot_com)
A few minor changes for the script to work in update mode

License: GPL v2 or later

Notes:
This script uses the get-dellwarranty function and SQL to get and store the warranty information for all dell systems in a specified ConfigMgr site.

Disclaimer:
I am not responsible for any problems this script may cause you.
#>
param ([parameter(Mandatory=$true)][String]$SQLServer, [parameter(Mandatory=$true)][String]$SiteCode)


# SQL function, 1 connection per command, may want to break that up but too lazy.
function Invoke-SqlQuery
{
    param(
        [Parameter(Mandatory=$true)] [string]$ServerInstance,
        [string]$Database,
        [Parameter(Mandatory=$true)] [string]$Query,
        [Int32]$QueryTimeout=600,
        [Int32]$ConnectionTimeout=15
    )

    try {
        $ConnectionString = "Server=$ServerInstance;Database=$Database;Integrated Security=True;Connect Timeout=$ConnectionTimeout"
        $conn=new-object System.Data.SqlClient.SQLConnection
        $conn.ConnectionString=$ConnectionString
        $conn.Open()
        $cmd=new-object system.Data.SqlClient.SqlCommand($Query,$conn)
        $cmd.CommandTimeout=$QueryTimeout
        $ds=New-Object system.Data.DataSet
        $da=New-Object system.Data.SqlClient.SqlDataAdapter($cmd)
        [void]$da.fill($ds)
        Write-Output ($ds.Tables[0])
    }
    finally {
        $conn.Dispose()
    }
}


# ---- Main ----

$ConfigMgrDatabase = "SMS_$SiteCode"
$warrantyDatabase = "DellWarrantyInfo"
$tableName = "DellWarrantyInfo"
$scriptPath=Split-Path -parent $MyInvocation.MyCommand.Definition

#Import warranty function
Import-Module "$scriptPath\DellWarrantyInfoFunction.psm1"

Write-Host "--- Dell Warranty Info SQL population script ---"

# create the main table if it doesn't already exist
Write-Verbose "Creating main table if needed..."


$TableQuery = @"
IF
Not Exists ( select * from sys.tables where name = 'DellWarrantyInfo' )
BEGIN
create table DellWarrantyInfo ( ResourceID int, ComputerName varchar(40),
DateScriptRan datetime, DaysLeft int, DellIBU varchar(16),
[Description] varchar(40), EndDate datetime, Provider varchar(16),
ServiceTag varchar(16), ShipDate datetime, StartDate datetime,
SystemType varchar(40), WarrantyExtended int);
grant select on DellWarrantyInfo to smsschm_users,webreport_approle
END
"
@

Invoke-SqlQuery -ServerInstance $SQLServer -Database $ConfigMgrDatabase -Query $TableQuery
if(!$?) { Write-Error "There was a problem creating or recreating the main table" }

# get a list of dell computers in the site
Write-Host "Obtaining list of Dell systems..."



$DellQuery = @"
SELECT DISTINCT sys.netbios_name0 as ComputerName,
sys.ResourceID, bios.SerialNumber0 as ServiceTag
FROM v_R_System sys
LEFT OUTER JOIN DellWarrantyInfo as dw on sys.ResourceID = dw.resourceid
INNER JOIN v_GS_PC_BIOS as bios on bios.ResourceID = sys.ResourceID
WHERE bios.Manufacturer0 like '%Dell%'
AND ( dw.EndDate IS NULL
OR dw.EndDate = ''
OR dw.DateScriptRan < DateAdd(dd, -(Round((407-322) * RAND() + 322,0)), GetDate()) )
"
@

$dellSystems = Invoke-SqlQuery -ServerInstance $SQLServer -Database $ConfigMgrDatabase -Query $DellQuery
if(!$? -or !$dellSystems) { Write-Error "There was a problem receiving the list of Dell systems." }

#progressbar variables
$length = $dellSystems.count / 100
if ($length -eq 0) { $length=1/100 }
$count=1

#if array is of length 0 the foreach clause still runs with a null value. If check to fix.
if ($dellSystems.count -gt 0 -OR $dellSystems.IsNull("ServiceTag") -eq $false -OR $dellSystems -ne $null)
{
    Write-Host "Gathering warranty information..."
    foreach ($dellSystem in $dellSystems)
    {
        #draws the progressbar based on the current count / (length/100)
        Write-Progress "Processing..." "$($dellSystem.ComputerName)" -perc ([Int]$count++/$length)
        $WarrantyInfo = Get-DellWarranty $dellSystem.ServiceTag -ServiceTagInput
        #insert or update the info in the database
        if ($WarrantyInfo) {
            Write-Verbose "Issuing update on $($dellSystem.ComputerName)..."
            Invoke-SqlQuery -ServerInstance $SQLServer -Database $ConfigMgrDatabase -Query "
IF
Not Exists (select ResourceID from DellWarrantyInfo
            where ResourceID = '$($dellSystem.ResourceID)')
BEGIN
    INSERT INTO $tableName VALUES (
        '$($dellSystem.ResourceID)',
        '$($dellSystem.ComputerName)',
        '$(Get-Date)',
        '$($WarrantyInfo.DaysLeft)',
        '$($WarrantyInfo.Region)',
        '$($WarrantyInfo.Description)',
        '$($WarrantyInfo.EndDate)',
        '$($WarrantyInfo.Provider)',
        '$($WarrantyInfo.ServiceTag)',
        '$($WarrantyInfo.ShipDate)',
        '$($WarrantyInfo.StartDate)',
        '$($WarrantyInfo.SystemType)',
        '$(if($WarrantyInfo.WarrantyExtended){1}else{0})' )
END
ELSE
    UPDATE $tableName
    SET [ComputerName] = '$($dellSystem.ComputerName)',
        [DateScriptRan] = '$(Get-Date)',
        [DaysLeft] = '$($WarrantyInfo.DaysLeft)',
        [DellIBU] = '$($WarrantyInfo.Region)',
        [Description] = '$($WarrantyInfo.Description)',
        [EndDate] = '$($WarrantyInfo.EndDate)',
        [Provider] = '$($WarrantyInfo.Provider)',
        [ServiceTag] = '$($WarrantyInfo.ServiceTag)',
        [ShipDate] = '$($WarrantyInfo.ShipDate)',
        [StartDate] = '$($WarrantyInfo.StartDate)',
        [SystemType] = '$($WarrantyInfo.SystemType)',
        [WarrantyExtended] = '$(if($WarrantyInfo.WarrantyExtended){1}else{0})'
    WHERE [ResourceID] = '$($dellSystem.ResourceID)'"

            if(!$?) { Write-Error "There was a problem adding $($dellSystem.ComputerName) to the database" }
        }
    }
}
Write-Host "Script Complete."

 

the reports

thank you, anonymous, for pointing out that i completely forgot to include how to access the data. the script will create a database table inside your sccm database. the reports included in the original script (noted under the background section) function exactly as indicated in scott's original post. just import them and use them.

have fun!

Jul 26, 2011

ds: logon request fails when groups > 1024

it's probably not logical for you to do this to yourself, so there's not much to worry about. however, through a series of nested groups, you can very well do this without thinking much about it. anyway, the very fact that I am posting this... means I ran into it. :(

for clarity, the group limitation is actually 1015 when you factor in well-known SIDs.

 

the error message

this is what you will see when attempting to log in:

The system cannot log you on due to the following error: During a logon attempt, the user’s security context accumulated too many security IDs. Please try again or consult your system administrator.

 

detecting the problem

if you want to see how many groups you (or some other user account) belong to, use the following kinds of commands (they may produce different results*):

powershell

Get-QADUser myuserid | Select-Object -ExpandProperty allmemberof | measure

cmd shell

dsquery user -samid myuserid | dsget user -memberof -expand | find /c /v ""

* when I ran them side by side, I got different counts. dsquery lists all security groups. get-qaduser seems to skip groups you belong to only through nesting -- basically, if someone created a group and added domain users to it as a nested group, get-qaduser doesn't show it.
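
for yet another data point, you can count what's actually in your logon token, which should be closest to what the logon process saw (whoami is built in on vista/2008 and later; it expands nested groups from the token itself):

(whoami /groups /fo csv | ConvertFrom-Csv).Count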

 

more information

here are two great articles for this information:

http://support.microsoft.com/kb/906208

http://support.microsoft.com/kb/328889

Jul 6, 2011

getting the first and last day of the month in sql

image: astronomical clock courtesy of simpologist

i was putting this together for a friend of mine so i thought i would post it since it seems like a pretty useful thing to have. should be self-explanatory.

-- first and last for previous month
DECLARE @FirstDayPrev DATETIME
DECLARE @LastDayPrev DATETIME

-- first and last for current month
DECLARE @FirstDayCurr DATETIME
DECLARE @LastDayCurr DATETIME

Set @FirstDayPrev = CONVERT(VARCHAR(25),DATEADD(mm, DATEDIFF(mm,0,getdate())-1, 0),101)
Set @LastDayPrev = CONVERT(VARCHAR(25),DATEADD(mm, DATEDIFF(mm, 0,getdate())+0, -1),101)

Set @FirstDayCurr = CONVERT(VARCHAR(25),DATEADD(mm, DATEDIFF(mm, 0,getdate())+0, 0),101)
Set @LastDayCurr = CONVERT(VARCHAR(25),DATEADD(mm, DATEDIFF(mm, 0,getdate())+1, -1),101)

Select @FirstDayPrev, @LastDayPrev, @FirstDayCurr, @LastDayCurr
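
for example, run on jul 6, 2011, the final select returns 06/01/2011, 06/30/2011, 07/01/2011, and 07/31/2011.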

Jun 10, 2011

viewing internet headers of emails in outlook 2010

this is so annoyingly obscure that I figured I'd point it out.

in outlook, open the message you're interested in. in the ribbon, locate the "tags" section and click the little arrow.

image

and there you have it...

image

Jun 6, 2011

opalis: operator console installation files

with products like kelverion's installer, you can make short work of installing the operator console. if you have gone through this manually, you know what an enormous amount of time the process takes.

if, however, you need all of the download files, I captured them and made them available here on my skydrive. the files are located under the OpalisOpConsole folder, split into 6 zip files. for this reason, make sure you are using an unzipping utility capable of piecing the content back together. beware! it's quite large and may take some time to download in its entirety.

Jun 2, 2011

open a command prompt to the directory in explorer

this is a cool trick I picked up from my coworker, who picked it up from a presentation. basically, you can open a command prompt at the exact folder your explorer is pointing to. here's how.

navigate to the folder you're interested in.

image

in the address bar of explorer, type "cmd" and hit enter.

image

this will launch a command prompt directly to the folder. perfect!

image

Apr 15, 2011

atlanta smug (atlsmug) coming up 4/22/11

hi all.

in case you're not on the mailing list, I just wanted to let you know that we have another user group meeting coming up april 22, 2011. yes, that's good friday. no, we didn't realize it. :)

at any rate, if you can make it out, we'd be happy as punch to have you. if not, you can always join virtually. if you're so inspired, you can come out for part of it and join virtually for the other part. anyway, all the details and registration links are at http://www.atlsmug.org/1/post/2011/04/atlanta-systems-management-user-group-4222011.html.

here's a real quick agenda:

  • 9:30 - 9:50: breakfast
  • 9:50 - 10:00: opening (ATLSMUG)
  • 10:05 - 11:00: Dan Newton, "v.Next Year – What You Need to Know About the Upcoming 2012 System Center Releases"
  • 11:05 - 12:00: Greg Cameron, "Delivering and Managing the Private Cloud with System Center 2012"
  • 12:05 - 12:25: lunch break
  • 12:30 - 1:30: John Rush (Shavlik), "3rd-Party Software Patching Using SCCM and Shavlik SCUPdates"
  • 1:35 - 2:25: Duncan McAlynn, "DCM vs ACS: An Audit Compliance Comparison"
  • 2:30 - 2:40: break
  • 2:45 - 4:05: Baelson Duque / Barry Shilmover, "Operations Manager 2012 Overview"
  • 4:05 - 4:15: closing (ATLSMUG)

sccm: client stuck downloading package with bit .tmp files in cache directory

honestly, it's very early (by my standards). my creativity is not quite awake yet, hence the very bad title of this post. I can't really find a good error message that captures the essence of the problem. so... I guess you'll just have to read my rambling instead.

let's get started. when this problem crops up, it looks as if the client is attempting to download the package but never gets anywhere with it. what's the first thing any sccm admin does? read logs, yes. one of the best ways I've found of reading logs is to start by running a search against the logs directory and dumping out anything matching the package id, advertisement id, etc. to a new txt file.
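
that kind of triage is a one-liner in powershell, by the way. a quick sketch (the paths are the defaults for a 2007 client on xp/2003; the package id is the one from this post):

Select-String -Path 'C:\WINDOWS\system32\CCM\Logs\*.log' -Pattern 'XYZ00017' | Out-File C:\temp\XYZ00017-hits.txt

anyway, this is what I found.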

in the cas.log, the client is clearly getting the policy and location of the package.

Matching DP Location found 0 - \\mySMSServer\SMSPKGC$\XYZ00017\
Requesting content XYZ00017.1, size(KB) 60833, under context S-0-0-00-1111111111-1111111111-111111111-111111 with priority Low
Target location for content XYZ00017.1 is C:\WINDOWS\system32\CCM\Cache\XYZ00017.1.S-0-0-00-1111111111-1111111111-111111111-111111

datatransferservice.log is pretty busy too setting up directories.

Created directory 'C:\WINDOWS\system32\CCM\Cache\XYZ00017.1.S-0-0-00-1111111111-1111111111-111111111-111111\Lang/zh-TW'.

I think that gives us enough to go on. there's stuff in execmgr.log and policyevaluator.log, but we're all familiar with those by now. instead, I started looking at the conditions around this problem. so far I knew that the client was getting the policy and beginning the process, to the point that it started creating directories but never went any further. I went to the directory highlighted above and noticed that the entire directory structure was there but no actual content. instead, there were a bunch of BIT*.TMP files.

I happened across this post on technet which explained quite vividly the problem at hand. I looked at the iis logs to determine if there was a transfer problem to the client. I found this in the logs (again by searching for the package id):

2011-04-14 15:16:28 192.168.0.2 HEAD /SMS_DP_SMSPKGC$/XYZ00017/Graphics/Gfxres.he-IL.resources - 80 - 192.168.0.3 Microsoft+BITS/6.7 404 7 0 60

okay, now we're getting somewhere. what do we do with that, though? well, the http codes specific to iis7 are in this kb article. in this case, I was looking for 404.7. here's how it maps back:

...
404.3 - MIME type restriction.
404.4 - No handler configured.
404.5 - Denied by request filtering configuration.
404.6 - Verb denied.
404.7 - File extension denied.
404.8 - Hidden namespace.
404.9 - File attribute hidden.
...

so, it's not your usual 404 (not talking about the ATL either).

as it turns out, the applicationHost.config file contains a section called <requestFiltering> that blocks certain extension types. by modifying this section of the .config file, it will allow the transfer of these file types over http -- exactly what BITS is using to transfer.

...
<requestFiltering>
<fileExtensions allowUnlisted="true" applyToWebDAV="true">
<add fileExtension=".asax" allowed="false" />
<add fileExtension=".ascx" allowed="false" />
<add fileExtension=".master" allowed="false" />
<add fileExtension=".skin" allowed="false" />
<add fileExtension=".browser" allowed="false" />
<add fileExtension=".sitemap" allowed="false" />
<add fileExtension=".config" allowed="false" />
<add fileExtension=".cs" allowed="false" />
<add fileExtension=".csproj" allowed="false" />
<add fileExtension=".vb" allowed="false" />
<add fileExtension=".vbproj" allowed="false" />
<add fileExtension=".webinfo" allowed="false" />
<add fileExtension=".licx" allowed="false" />
<add fileExtension=".resx" allowed="false" />
<add fileExtension=".resources" allowed="false" />
<add fileExtension=".mdb" allowed="false" />
<add fileExtension=".vjsproj" allowed="false" />
<add fileExtension=".java" allowed="false" />
<add fileExtension=".jsl" allowed="false" />
<add fileExtension=".ldb" allowed="false" />
<add fileExtension=".dsdgm" allowed="false" />
<add fileExtension=".ssdgm" allowed="false" />
<add fileExtension=".lsad" allowed="false" />
<add fileExtension=".ssmap" allowed="false" />
<add fileExtension=".cd" allowed="false" />
<add fileExtension=".dsprototype" allowed="false" />
<add fileExtension=".lsaprototype" allowed="false" />
<add fileExtension=".sdm" allowed="false" />
<add fileExtension=".sdmDocument" allowed="false" />
<add fileExtension=".mdf" allowed="false" />
<add fileExtension=".ldf" allowed="false" />
   <add fileExtension=".ad" allowed="false" />
<add fileExtension=".dd" allowed="false" />
<add fileExtension=".ldd" allowed="false" />
<add fileExtension=".sd" allowed="false" />
<add fileExtension=".adprototype" allowed="false" />
<add fileExtension=".lddprototype" allowed="false" />
<add fileExtension=".exclude" allowed="false" />
<add fileExtension=".refresh" allowed="false" />
<add fileExtension=".compiled" allowed="false" />
<add fileExtension=".msgx" allowed="false" />
<add fileExtension=".vsdisco" allowed="false" />
<add fileExtension=".asa" allowed="false" />
...

in my case, it was .resources. make the change, initiate iis reset, and away you go.
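
if you'd rather script the change than hand-edit applicationHost.config, an appcmd call like this should do it (a sketch -- .resources was my culprit; substitute yours and test first):

# remove the .resources entry from request filtering, then restart iis
& "$env:windir\system32\inetsrv\appcmd.exe" set config /section:requestFiltering /-"fileExtensions.[fileExtension='.resources']"
iisreset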

Apr 7, 2011

misc: offering remote assistance in windows 7

if you're looking for how to offer remote assistance in windows 7, here's a command line you can issue to do this:

%windir%\system32\msra.exe /offerra

Apr 6, 2011

sccm: content hash fails to match

back in 2008, I wrote up a little thing about how distribution manager fails to send a package to a distribution point. even though a lot of that post was about the failure of packages to get delivered to child sites, the result was pretty much the same: when the client tries to run the advertisement with an old package, it fails because of a content mismatch.

I went through an ordeal recently capturing these exact kinds of failures and corrected quite a number of problems with these packages. this blog post is my effort to capture how those problems were resolved. if nothing else, it's a basic checklist of things you can use.

 

DETECTION

status messages

take a look at your status messages. this has to be the easiest way to determine where these problems exist. unfortunately, it requires that a client is already experiencing problems. there are client logs you can examine as well, such as cas, but I wasn't even sure I was going to have enough material to blog, so I didn't bother to capture the exact errors. :/ anyway, here are the specific elements:

type: error
component: software distribution
message id: 10057
detail: The program for advertisement "<advertisement id>" has failed because download of the content "<package id>" - "<program name>" has failed. The download failed because the content downloaded to the client does not match the content specified in the content source. Possible causes: The content on the distribution point has been manually modified, or a local administrator on the computer has modified the content in the computer's hash. Solution: Refresh the content on the distribution point and retry the download.

and one more...

type: error
component: software distribution content access
message id: 10030
detail: Download of the content "<package id>" - "4" has failed. The download failed because the content downloaded to the client does not match the content specified in the source. Possible causes: The content on the distribution point has been manually modified, or a local administrator on the computer has modified the content in the computer's hash. Solution: Refresh the content on the distribution point and retry the download.
 
hashdir

ah, the elusive tool known as hashdir.exe. this tool analyzes a directory and returns its hash value. I used it to hash the source directory and checked it against a child site to determine if the hashes matched. this is the usage info:

Usage: HashDir DirectoryName HashVersion [/FileList]
Example: HashDir d:\smspkgd$\FS100003 2
HashVersion: RTM = 1 and SP1 = 2

I assume anyone reading this post is on configmgr 2007 sp1 or greater, in which case, you want to use the value of 2 as shown in the example. by checking the source directory against a smspkg directory, you can get a very good sense of whether or not your files are out of sync. if you need hashdir, click HERE.
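
if you have a handful of suspect packages, you can wrap hashdir in a quick loop and eyeball the source against a child site (ids and paths are examples; assumes hashdir.exe is in your current directory):

'XYZ00234','XYZ00017' | ForEach-Object {
    Write-Host "== $_ =="
    .\hashdir.exe "d:\sources\$($_)" 2                       # the source directory
    .\hashdir.exe "\\childSiteServer\smspkgd$\$($_)" 2       # the DP's copy
}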

sql query

if you refer to the post I mentioned at the beginning of this one, you'll find a sql query that will (hopefully) enumerate where problems may exist. I referenced the dx21 method, but in truth, as I discovered much later, I should have credited the genius steve rachui.

SELECT * FROM PkgStatus WHERE Type=1 AND Status=1 AND ID='XYZ00234'

 

CORRECTION

hidden files

I wasn't aware that this could occur, but there are enough posts that seem to indicate that hidden files can cause the hash to mismatch. make sure you examine your source files to ensure they are not set with a hidden flag.

binary delta replication

sometimes this setting doesn't seem to play well with certain files. one quick remediation would be to turn this off and then update the package.

remove the pck file

at the child site, if you manually delete the pck file, the child site will have no choice but to bring down the pck file from the parent site. it's okay if you have ONE child site, but it loses its fun factor when you multiply it by a number of systems. for this reason, I do not recommend this... but alas, it works.

modify pkgstatus and sourceversion in pkgstatus table

I mentioned that earlier post a couple of times. there's a sql query in there that will update the required fields for a package server. don't bother trying to set the value against a lot of different servers at once because it will fail. :) I don't know how I know this. I just do. trust me. so what do you do? well, I wrote this sql query, borrowing its guts from a post about opalis. it's a while loop that goes through a list of servers. it's kind of cool.

declare @str_objname sysname,
        @str_execsql nvarchar(256),
        @str_packageid nvarchar(10)

set @str_packageid = '<package id here>'

declare cur_SCCMDPs cursor for
    SELECT PkgServer FROM PkgStatus WHERE ID=''+@str_packageid+'' AND TYPE=1

open cur_sccmdps

fetch cur_sccmdps into @str_objname

while @@FETCH_STATUS = 0
begin
    set @str_execsql = 'WAITFOR DELAY ''00:00:02'' update PkgStatus set Status=2, SourceVersion=0 where ID= '''+@str_packageid+''' and Type=1 and PkgServer = '''+@str_objname+''''
    print @str_execsql
    exec sp_executesql @statement = @str_execsql
    fetch cur_sccmdps into @str_objname
end

close cur_sccmdps
deallocate cur_sccmdps

there's a wait delay on it so that while it's updating the records for each row, it'll have a chance to process it. I don't know if it's required but seemed like a good idea.

 

NOTES

it's critical to understand when to update versus refresh a package. if you make source file changes, you should be updating the package as refreshing will only refresh the contents to distribution points from the existing compressed files (.pck). refreshing is really only useful when you need to "repair" a distribution point. you'll notice that only updating will increment the package source version. more details are in this technet article.

another important thing to understand is what happens when you change a package from using a compressed copy to the source folder, or vice versa. while the distribution points at the parent site will update, the child sites will not. this will most certainly cause a hash mismatch problem. any change like this requires that you update the distribution points. it's detailed in this technet article.

finally, if you are interested in what sccm is telling the world is the hash of the source, you can run this sql query to return the results of a package:

select PkgID, Name, Hash, NewHash from SMSPackages where PkgID = '<packageid>'

 

don't forget to TEST, TEST, TEST. what works for me may not work for you.

Feb 9, 2011

how to use dropbox to synchronize windows 7 sticky notes

you may remember a while back, I wrote up some steps on how to use windows live mesh to achieve sticky notes synchronization. live mesh (then sync, and now mesh again) was a great product to use for this purpose because you could point it at the folder and tell it to sync. unfortunately, I didn't find it very reliable, and judging by the comments I read, neither did a lot of my readers.

this morning, I got quite frustrated by a couple of things going on. first of all, no sync! second, digsby! sticky notes and digsby are two things I've come to rely on. there's some talk I read about a protocol change (or using old protocols or something like that) that pointed to digsby's msn having connection problems while windows live mesh is running. while live mesh was running, msn would drop off. according to their blog, it's a known issue and will be revised in a future release.

anyway, so here I am. I decided to get rid of mesh in favor of something I've been using awhile and have come to love: dropbox. the challenge with dropbox was finding a way to synchronize the content (yeah, that's easy, I know) without the content living only in the dropbox folder.

the first thing I thought of doing was writing a script to copy the dropbox content on sync detection. this seemed like OVERKILL when I found someone else's brilliant solution: ntfs junctions.

 

configuring dropbox and junctions

this is a rundown of the steps involved, which is pretty dang easy. so let's get started...

  • close your sticky notes application. make sure it's closed. if it's still running, you'll see the StikyNot.exe process running.
  • create a new directory called "sticky notes" somewhere in your dropbox folder. I just put mine in the root for ease.
  • navigate to the "sticky notes" directory on your computer, where you'll find a file called StickyNotes.snt. (the directory path is c:\users\<userid>\appdata\roaming\microsoft\sticky notes)
  • copy the StickyNotes.snt file to the dropbox folder you created.
  • now get rid of the original "sticky notes" directory.
  • from a cmd prompt, type the following to create your junction:
mklink /J "%APPDATA%\Microsoft\Sticky Notes" "<Full Dropbox Path>\Sticky Notes"

  • once you open up sticky notes again, it'll be pointing to the file in your dropbox folder courtesy of the junction you just created. :)

if you open the path to the original sticky notes directory, you'll notice there's a shortcut icon on the folder to indicate it's a junction. if you open up the folder, you'll find yourself in your dropbox folder.

image

okay, now you're syncing sticky notes to dropbox... but what about another computer?  simple.  just run through the same steps above, except this time, don't worry about copying your stickynotes.snt file since it already exists in dropbox.
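
if you're setting this up on several machines, here's the whole procedure as a quick sketch (the dropbox path is an example -- use your own; and note remove-item deletes your original folder, so keep the copy step first):

$src = Join-Path $env:APPDATA 'Microsoft\Sticky Notes'
$dst = 'C:\Users\me\Dropbox\Sticky Notes'   # example path -- adjust

# sticky notes must not be running while we do this
Stop-Process -Name StikyNot -ErrorAction SilentlyContinue

New-Item -ItemType Directory -Path $dst -Force | Out-Null

# first machine only: seed dropbox with your existing notes (skip on the others)
Copy-Item (Join-Path $src 'StickyNotes.snt') $dst

# replace the local folder with a junction into dropbox
Remove-Item $src -Recurse -Force
cmd /c mklink /J "$src" "$dst"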

 

notes

there's one caveat I should mention. the synchronization can't take place while sticky notes is running. you'll want to close your sticky notes application every now and then so that dropbox has a chance to sync your changes.

this applies to computers receiving the files from dropbox as well. as long as sticky notes is running, it won't be able to sync the contents.

Jan 7, 2011

powershell: naming functions and cmdlets

creating stuff in powershell and curious about how to name it?  you already know the verb-noun format, but what are the right verbs to use?

use the get-verb cmdlet!

Verb          Group
----          -----
Add           Common
Clear         Common
Close         Common
Copy          Common
Enter         Common
Exit          Common
Find          Common
Format        Common
Get           Common
Hide          Common
Join          Common
Lock          Common
Move          Common
New           Common
Open          Common
Pop           Common
Push          Common
Redo          Common
Remove        Common
Rename        Common
Reset         Common
Search        Common
Select        Common
Set           Common
Show          Common
Skip          Common
Split         Common
Step          Common
Switch        Common
Undo          Common
Unlock        Common
Watch         Common
Backup        Data
Checkpoint    Data
Compare       Data
Compress      Data
Convert       Data
ConvertFrom   Data
ConvertTo     Data
Dismount      Data
Edit          Data
Expand        Data
Export        Data
Group         Data
Import        Data
Initialize    Data
Limit         Data
Merge         Data
Mount         Data
Out           Data
Publish       Data
Restore       Data
Save          Data
Sync          Data
Unpublish     Data
Update        Data
Approve       Lifecycle
Assert        Lifecycle
Complete      Lifecycle
Confirm       Lifecycle
Deny          Lifecycle
Disable       Lifecycle
Enable        Lifecycle
Install       Lifecycle
Invoke        Lifecycle
Register      Lifecycle
Request       Lifecycle
Restart       Lifecycle
Resume        Lifecycle
Start         Lifecycle
Stop          Lifecycle
Submit        Lifecycle
Suspend       Lifecycle
Uninstall     Lifecycle
Unregister    Lifecycle
Wait          Lifecycle
Debug         Diagnostic
Measure       Diagnostic
Ping          Diagnostic
Repair        Diagnostic
Resolve       Diagnostic
Test          Diagnostic
Trace         Diagnostic
Connect       Communications
Disconnect    Communications
Read          Communications
Receive       Communications
Send          Communications
Write         Communications
Block         Security
Grant         Security
Protect       Security
Revoke        Security
Unblock       Security
Unprotect     Security
Use           Other
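
get-verb also takes wildcards, which makes it handy for checking a candidate verb before you commit to a function name. a quick sketch:

# list the approved verbs starting with "un"
Get-Verb un*

# warn if a candidate verb isn't on the approved list
if (-not (Get-Verb 'Slay')) { "'Slay' is not an approved verb -- pick another" }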

Jan 5, 2011

opalis: guidance on troubleshooting failed workflows

as you move into deeper integration stories with opalis, it’s probable that you’re going to run into situations where the expected outcome isn’t quite as you dreamed.

now why is this?  it's usually because, as you write the process, you haven't completely determined what you'll need.  this manifests itself (for me, anyway) most often as security-related problems.  the most common reason this occurs is that you are designing as your own account and running your workflows as the opalis action service account.

so, if you would, allow me to offer a little guidance on this.

  • repeat to yourself: i am not the opalis action account.  this has a profound effect in separating you from your delusion that the universe does not want you to succeed today.  the point really is to remember that the testing console runs as the user launching the opalis client.  here's a demonstration by pete.
  • review your audit history.  it's object specific but sometimes you can glean problems occurring by viewing the audit history such as access denied errors when using the AD object.

image

  • check the system logs.  while the logs can be quite noisy, they can provide a lot of value if the object problems are being logged.  how do you know if they are?  check!  this post has the log locations.  the policymodule*.log files are the ones to read.
  • minimize confusion by documenting.  document the permissions required for the action server with passionate fervor!  (duh)  i can't even begin to explain how many times i was changing things in the wrong place.
  • measure once, cut twice or measure twice, cut once – just measure.  go back and assess the various places where a credential conflict might cause problems.  generally, when connections are defined, you define the credentials as well to connect to those environments.  while these may execute as the defined account (connections to opsmgr for example), other objects that were not defined will execute as you (in the testing console) and as the opalis action service account (in running policies).
  • put on your running shoes.  if you’re working with nested workflows, chase your log histories!  review the log histories of all your workflows.  while not as valuable as being able to run them in the testing console, it still pulls back some data that can be useful.
  • execute each workflow individually if required.  sometimes log history isn't enough.  it can be useful to separate the workflows and run them through the testing console.  remember your identity crisis.  you're running these tests as YOU, not the action server.
  • append line object is your friend.  back when we scripted in that archaic language known as vbscript, there was this concept of echoing.  you chiseled commands like wscript.echo "i am so cool" into a rock wall and inside of your cmd shell, out it would come (and coincidentally, inside the cmd shell is also where you're cool).  this is akin to using echo statements to capture places that you are in your script.  write to log.  log often.  log is your copilot.
  • be something you’re not.  try running the opalis client as the action account. this presents its own challenges though as you will be required to grant the action account access to your policies.
  • violate all security principles.  where at all possible, just put the action account into the domain administrators and rid yourself of all the pesky permissions issues.  I AM JOKING!  do not ever do this.

for real world context, i wish to say that i ran into every one of these things.  and probably twice.  (except the last one.  thought about it.  a lot.  not dumb enough to do it though).