
problems with MOMma

update: this information has been posted to an article on myitforum.com. Since implementation, it seems like the database has done nothing but grow, grow, grow. I've blamed the Exchange guys relentlessly for having a very noisy environment. No matter how many times I ran the MOMX Partitioning and Grooming job, the database would not free up any space. It turns out that if you have a MOM warehouse enabled, grooming is tied directly to the warehouse transfer job.
Here are the details. If you want to know the last time your DTS job completed successfully, you can comb through the event log on the reporting server, or you can issue this query against your OnePoint database:
select * from ReportingSettings
The first column, labeled TimeDTSLastRan, is the marker for the last successful transfer. Turns out if this isn't current, your grooming jobs aren't doing anything. Mine was set to the end of February. Hmmm. That'd explain the obscene growth pattern. I've run the job 5 times using the latency switch. The time stamp hasn't moved.
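If you want to see at a glance how stale that marker is, the same query extends naturally (a minimal sketch against the same OnePoint table; DaysBehind is just a label I made up):

select TimeDTSLastRan, datediff(day, TimeDTSLastRan, getdate()) as DaysBehind from ReportingSettings

Anything much larger than your /latency value means the warehouse transfer - and therefore grooming - is stuck.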
By the way, the job is scheduled on the reporting server. It's executed as something like this:

MOM.Datawarehousing.DTSPackageGenerator.exe /latency:20 /srcserver:OnePointDBServer /srcdb:OnePoint /dwserver:WarehouseServer /dwdb:SystemCenterReporting /product:"Microsoft Operations Manager"

It's in the %ProgramFiles%\Microsoft System Center Reporting directory.
If you notice, there's a /latency switch. This lets you specify which items to transfer to the warehouse. For example, 20 means anything older than 20 days. This is useful if your DTS job is timing out because an exorbitant amount of data is being transferred - potentially overwhelming the transaction log, etc. Also, there's a /silent switch that you're supposed to use when it's issued as a scheduled task. I pulled it out to see what this job was doing exactly. In the event of a successful execution, you should see an event message like this:

The execution of the following DTS Package succeeded:
Package Name: SC_Inner_DTS_Package
Package Description: This package transfers data from datafoo\foo.OnePoint to foo.SystemCenterReporting
Package ID: {481AA51A-8C84-42E3-9879-D228290895D0}
Package Version: {24A473AA-4C8A-486B-9ED4-970D35A70047}
Package Execution Lineage: {55B111CE-72ED-4231-821B-AAE321763EC5}
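If you want to run the package by hand and watch it work, here's roughly what that looks like with /silent pulled out and a more generous latency to shrink each transfer (the server and database names are the same placeholders as above):

cd /d "%ProgramFiles%\Microsoft System Center Reporting"
MOM.Datawarehousing.DTSPackageGenerator.exe /latency:60 /srcserver:OnePointDBServer /srcdb:OnePoint /dwserver:WarehouseServer /dwdb:SystemCenterReporting /product:"Microsoft Operations Manager"

Step /latency down on successive runs (60, 40, 20...) so each pass moves a manageable chunk instead of hammering the transaction log in one shot.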

Well, after going through many latency switches and kicking off the groom jobs (MOMX Partitioning and Grooming), I was able to get the 15 GB DB back down to 5 GB. Interestingly, though, the time stamp still hasn't changed. Hmmm...
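Incidentally, if you'd rather kick off the groom job from a query window than dig through SQL Agent, something like this should do it (assuming the stock job name, run on the OnePoint database server):

exec msdb.dbo.sp_start_job @job_name = 'MOMX Partitioning and Grooming'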

Comments

  1. Marcus,

    Point of clarification. When I run

'select * from groomsettings' against OnePoint, it returns the date I actually installed MOM.

However, if I run 'select * from reportingsettings', I get the last time the job actually ran in the TimeDTSLastRan column.

    Different in your environment?

    Looking forward to your book release.

    Pete

  2. hey pete, actually you're correct. back when i wrote this, i updated my blog post with an "update" line that didn't really seem to do much. i just now republished with the correct item. it is reportingsettings.

    thanks man!


