
configuration manager compliance summary reports by site

[image: TPS Reports, by cell105 via Flickr]

if you’ll recall from my last post, i had a bit of trouble figuring out a way to generate reports by authorization list.  well, i got past that hurdle.  the problem was that the views i was using to generate the report weren’t really designed for running at a massive scale.  in fact, i started timing it and realized it was taking about 3-4 seconds per machine on average.  so for an average enterprise of 10,000 machines, it would take --

( ( 10,000 * 3 ) / 60 ) / 60 = 8.33 hours

no one really found this acceptable, for obvious reasons.  well, with a bit more digging, i found i could get the same result without having to aggregate all of the report details just to generate a compliance number.  instead of using v_UpdateComplianceStatus, i started using v_UpdateListStatus_Live.  is it just me, or do they seem to be named inappropriately?
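
to give a feel for the difference, here’s a minimal sketch of the two shapes of query.  the column names, the status value, and the @AuthListID prompt are my assumptions about these views for illustration, not necessarily what the reports actually use:

    -- a minimal sketch, assuming the usual ResourceID/CI_ID/Status columns on these views
    DECLARE @AuthListID int   -- stands in for the report's authorization list prompt

    -- old approach: roll up every individual update's status per machine
    -- just to decide whether the machine is compliant (assumed: 2 = required)
    SELECT ucs.ResourceID,
           CASE WHEN SUM(CASE WHEN ucs.Status = 2 THEN 1 ELSE 0 END) = 0
                THEN 1 ELSE 0 END AS Compliant
    FROM   v_UpdateComplianceStatus ucs
    GROUP BY ucs.ResourceID

    -- new approach: v_UpdateListStatus_Live already carries a rolled-up state
    -- per machine and per authorization list (its CI_ID), so there is nothing to aggregate
    SELECT uls.ResourceID, uls.Status
    FROM   v_UpdateListStatus_Live uls
    WHERE  uls.CI_ID = @AuthListID

the second query is a straight read per machine instead of a group-by over every update row, which is roughly where all that time was going.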

anyway, i created a new set of reports, taking a bit from the old ones and a bit from existing reports such as the one i created before.  i think it’s more robust, and it certainly runs faster than before.  MUCH faster!  (don’t mind the blank spaces.  the interesting thing about web reports is that when you use temporary tables, the report displays a blank area.)

UPDATE: to eliminate the blank spaces, use -

  • SET NOCOUNT ON at the beginning of the statements that build your temporary table
  • SET NOCOUNT OFF at the end of your query, after the temporary table is filled

thanks to sudeesh rajashekharan’s answer on this post.
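
for reference, a web report query that uses a temporary table ends up shaped like this.  the table and column names here are just placeholders to show where the two statements go:

    SET NOCOUNT ON      -- suppress the "(n rows affected)" messages that render as blank areas

    CREATE TABLE #Summary (
        MachineName nvarchar(255),
        Status      int
    )

    INSERT INTO #Summary (MachineName, Status)
    SELECT sys.Name0, uls.Status                    -- assumed columns, for illustration
    FROM   v_UpdateListStatus_Live uls
    JOIN   v_R_System sys ON sys.ResourceID = uls.ResourceID

    SET NOCOUNT OFF     -- turn row counts back on once the temp table is filled

    SELECT MachineName, Status
    FROM   #Summary

    DROP TABLE #Summary

the two SET NOCOUNT lines are the important part; everything in between is whatever actually fills your temp table.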

this is how the report set looks.  it starts with a summary based on each site (blanked out):

[screenshot: compliance summary by site]

clicking the link takes you to the site details report, which looks like this:

[screenshot: site details report]

this report lists each machine, the last logged-on user, and its state.  i found it useful to also include the last known scan time and the last known heartbeat.  that way, an old scan time paired with a recent heartbeat indicates that a machine is having a problem scanning.
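
if you want to pull those two columns yourself, the join looks roughly like the sketch below.  the scan and heartbeat views are where i’d expect that data to live, so treat the names as assumptions rather than the report’s exact query:

    SELECT sys.Name0        AS MachineName,
           sys.User_Name0   AS LastLoggedOnUser,
           uss.LastScanTime AS LastScanTime,        -- software updates scan
           hb.AgentTime     AS LastHeartbeat        -- heartbeat discovery
    FROM   v_R_System sys
    LEFT JOIN v_UpdateScanStatus uss
           ON uss.ResourceID = sys.ResourceID
    LEFT JOIN v_AgentDiscoveries hb
           ON hb.ResourceId = sys.ResourceID
          AND hb.AgentName = 'Heartbeat Discovery'
    -- an old scan plus a recent heartbeat is the "can't scan" signal, e.g.:
    -- WHERE uss.LastScanTime < DATEADD(day, -14, GETDATE())
    --   AND hb.AgentTime    > DATEADD(day, -7, GETDATE())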

now clicking on an individual machine takes you to a detailed report that shows the status of each update.  it looks something like this:

[screenshot: machine details report]

the report MOF is available on system center central (link provided at the end).  once you import the reports, you’ll have to link them together to get the drill-downs working.  here’s how you do it.

  • Security Compliance (Summary)
    • link to Security Compliance (Site Details)
    • authlistid – column 6
    • collid – column 7
    • siteid – column 1

[screenshot: link settings for Security Compliance (Summary)]

  • Security Compliance (Site Details)
    • link to Security Compliance (Machine Details)
    • authlistid – column 6
    • machinename – column 1

[screenshot: link settings for Security Compliance (Site Details)]

and there you have it.  here’s the link for the report: http://www.systemcentercentral.com/Downloads/DownloadsDetails/tabid/144/IndexID/24458/Default.aspx

