O R G A N I C / F E R T I L I Z E R: 06.06

Jun 29, 2006

ds: add conditional forwarders by command line

sometimes i think it's relevant to follow your own advice. of course, some lessons aren't learned by sedulous effort. oftentimes, it requires moments of sheer languor. rtfm, rtfm, rtfm i tell myself! if you want to add conditional forwarders through the command line, use this:

dnscmd [servername] /zoneadd [zonename.com] /forwarder [primary ip address] [secondary ip address]

the zone type switch on /zoneadd is what tells dnscmd which kind of zone you want (e.g. /primary, /secondary, etc). using /forwarder tells dnscmd that you're interested in adding conditional forwarders. this stuff rocks. by the way, this is only available on 2003 or later. here's the tfm if you're looking for all the details.
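just to make that concrete, here's a sketch with made-up values (dc01, corp.example.com, and the forwarder addresses are all hypothetical):

dnscmd dc01 /zoneadd corp.example.com /forwarder 10.0.0.1 10.0.0.2

once that runs, queries for corp.example.com get sent straight to those two servers instead of walking the usual resolution path.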

Jun 26, 2006

mom: sp_helpdb - cannot insert the value null into column

been getting any of these errors?

the system stored procedure sp_helpdb, which is used to gather information about the databases, has returned an error that may indicate that it cannot determine the db owner for the database [databasename].

here are the details:

sp_helpdb @dbname='databasename' on sql server instance: [instancename]. error number: 515, error information: [microsoft][odbc sql server driver][sql server]cannot insert the value null into column '', table ''; column does not allow nulls. insert fails.

this generally occurs when there's no owner specified for the database. executing this query will tell you if that's the case:
select name, suser_sname(sid) from master.dbo.sysdatabases where suser_sname(sid) is null
if the database does indeed show up in this query, sp_changedbowner will fix it. this will assign sa as the owner (make sure you run it in the context of the database you need to correct):
exec sp_changedbowner 'SA'
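one thing to watch: sp_changedbowner operates on the current database, so switch context first. a quick sketch (databasename is just a placeholder for whichever database the query above flagged):

use databasename
go
exec sp_changedbowner 'sa'
go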

Jun 20, 2006

mom: dell openmanage mp has been updated

it's been about a year since their last release, so i'm sure there must be some improvements. i'm profiling the management pack in mpstudio now to see how it looks. by the way, you won't find it yet on the mom catalog, but you can get it here: http://ftp.dell.com/sysman/DOMMP21_A01.exe. oh, and germany scored in the ecu v ger game in the first 5 minutes. wow!

Jun 19, 2006

ds: machine account password interval

you're probably familiar with default machine account password reset intervals:
  • nt 4: 7 days
  • 2000 & above: 30 days
some additional details on this came through on the activedir.org list. it's pretty cool, so i thought i'd share for those who aren't subscribed. unfortunately, the author of this information doesn't have a blog (yet). activedir.org does, however, maintain archives of the list. :) i'd link you... but that section seems unresponsive right now. at any rate, here's a snippet of the post. these are the logs generated during success, failure, and the random offset.
  • success:
05/25 14:48:22 [SESSION] NORTHAMERICA: NlChangePassword: Doing it.
05/25 14:48:22 [SESSION] NORTHAMERICA: NlChangePassword: Flag password changed in LsaSecret
05/25 14:48:23 [SESSION] NORTHAMERICA: NlChangePassword: Flag password updated on PDC
05/25 14:48:23 [MISC] NlWksScavenger: Can be called again in 30 days (0x9a7ec800)
  • failure:
05/16 01:13:24 [SESSION] NORTHAMERICA: NlChangePassword: Doing it.
05/16 01:13:24 [SESSION] NORTHAMERICA: NlSessionSetup: Try Session setup
05/16 01:13:24 [SESSION] NORTHAMERICA: NlDiscoverDc: Start Synchronous Discovery
05/16 01:14:05 [CRITICAL] NORTHAMERICA: NlDiscoverDc: Cannot find DC.
05/16 01:14:05 [CRITICAL] NORTHAMERICA: NlSessionSetup: Session setup: cannot pick trusted DC
05/16 01:14:05 [MISC] Eventlog: 5719 (1) "NORTHAMERICA" 0xc000005e c000005e ^...
05/16 01:14:05 [SESSION] NORTHAMERICA: NlSessionSetup: Session setup Failed
05/16 01:14:05 [MISC] NlWksScavenger: Can be called again in 15 minutes (0xdbba0)
  • random offset:
05/25 15:03:22 [MISC] NlWksScavenger: Can be called again in 30 days (0x9d671aca)
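if you want to check (or change) the interval on a particular box, the knob lives under the netlogon parameters key. a quick sketch from memory:

reg query HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v MaximumPasswordAge

MaximumPasswordAge is in days; if the value isn't set, the default (30 on 2000 and above) applies. there's also a DisablePasswordChange value in the same key if you want to shut the whole mechanism off.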

ds: technet webcasts on active directory

if you're looking for webcasts to increase your knowledge on ad, check this out.

mom: looking for a training class?

i have a hard time recommending a training class for mom. this is because, historically, microsoft official curriculum sucks. the information is too vague, not very timely, and doesn't discuss real-world issues. there's a new offering, though, that looks very promising and has had some excellent reviews. i've looked over the syllabus, and it looks very complete. it's a 4-day crash course on everything you need to know about mom and will bring your level of understanding much higher than the MOC class ever could. it's also taught by mom consultants who know their ... stuff. anyway, there's a class coming up in Atlanta! maybe i'll see you there. here are the details.

mom: tracking down duplicate notifications

while i was out at teched, a reader sent me an email asking how to track down duplicate notifications. this was pretty fresh in my memory, since i had just gone through the same ordeal explaining to another group here why they received duplicate emails. now that i have the exact details at my disposal, i can relay them here with some manner of lucidity. (i hope, anyway. still trying to get back into work mode ... and for some reason, someone brewed the old, nasty corporate coffee instead of the new, aromatic seattle's best. ah well...) the first thing to do is find the alert in the mom console. once you've isolated it, check the history tab of the alert. you might see something similar to this:
Alert is created in management group myMgmtGroup.
=== 6/01/2006 08:20:03 ===
The server side response 'notify group: Network Administrators' triggered by rule 'Send notification for any Alerts with a severity of "Error" or Higher' (DF7DA784-D7D8-4FC5-8109-04AB00A1B511) is executed after alert suppression.
=== 6/01/2006 08:20:03 ===
The server side response 'notify group: Other Network Administrators' triggered by rule 'Send notification for any Alerts with a severity of "Error" or Higher' (DF7DA784-D7D8-4FC5-8109-04AB00A1B511) is executed after alert suppression.
what's going on here? as you'll notice, two server side responses are executed. so... at least now you know why you have duplicate notifications. where they're coming from is the next logical question. once you have the rule guid, the rule name is pretty easy to find. copy off those rule guids above (uhhh, not mine exactly, your own... guid... you know, unique? get your own). issue the following command in sql query analyzer:
select name from processrule where idprocessrule = 'rule-guid'
replace rule-guid with your rule guid. now you can use that name to search for the rule in the administrator console.
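for example, using the guid from the history snippet above:

select name from processrule where idprocessrule = 'DF7DA784-D7D8-4FC5-8109-04AB00A1B511'

that hands back the rule name, which you can then go hunt down in the administrator console.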

Jun 9, 2006

os: i can't see the interactive cmd shell

i think i tried 5 times in a row to get cmd.exe to show up as an interactive window.
just for reference, if you're trying this, the command would look something like this:

at HH:MM /interactive cmd.exe

right? looks right. this will spawn a cmd shell, alright. you'll see it in the process list, but you won't be able to get to it. why is this?

it turns out the interactive switch does hold true, but only if you're logged on at the console. rdp, terminal services, etc... won't show it. the good news is, if it's 2003, you can connect to the console session using this command:

mstsc /v:[servername] /console
:)
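putting the whole dance together (the time and server name are made up):

at 23:30 /interactive cmd.exe
mstsc /v:myserver01 /console

schedule the shell, connect to the console session, and the window should be sitting there waiting for you.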

Jun 8, 2006

mom: all notifications for a server

pretty quick and easy. i didn't even believe it would work, since it seems to defy logic (at least mine). apparently it does. once i set it up, i realized why. here are the steps:
  1. create a new rule group.
  2. create a computer group (or use an existing one).
  3. populate the computer group with the computers you want alerts for.
  4. associate the computer group to the rule group.
  5. create an alert rule.
    • only match alerts generated by rules... should be unchecked.
    • add a severity criteria if you want.
    • setup a notification response.
i've no doubt you're smarter than me, but entertain me for a bit and let me explain. the alert rule in this case is not bound to the rules of any given rule group. therefore, the alert rule is generic and applies to anything that matches its criteria.

Jun 7, 2006

mom: some details about mom 2005 summary reporting

get this… since the day summary reporting was rolled out, the dts package has never successfully completed. i’m talking 6 months. so, for 6 months, the systemcenterreporting database has been force-fed data with nothing ever aggregated. for those of you that may not know, summary reporting is an add-on that aggregates data points in the reporting warehouse database. by using this system, you can effectively reduce the size of your warehouse. this is by no means a comprehensive guide, just some things discovered along the way. more information is in the guide that accompanies the download. to get started, here are the stored procedures that can be executed to change the behavior of the summary reporting pack:
  • exec p_setweekstartday [value]
    • [value] is 1-7 (sunday through saturday, respectively)
  • exec p_setcutoffdate [yyyy-mm-dd]
  • exec p_setsamplegroomdays [value]
    • [value] must be greater than 7
  • exec p_setalertgroomdays [value]
    • [value] must be greater than 7
  • exec p_setlowpercentilevalue [value]
    • [value] 1 through 49
  • exec p_sethighpercentilevalue [value]
    • [value] 51 through 99
  • exec p_setgroombatchsize [value]
    • [value] must be greater than 10
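as a quick sketch of how these get called (the date is made up, and i'm assuming the date goes in as a quoted yyyy-mm-dd string), here's setting the week to start on sunday and aggregation to begin on june 4th, 2006 (a sunday, so it lines up with the week start day, which matters below). run these in query analyzer against the reporting database:

exec p_setweekstartday 1
exec p_setcutoffdate '2006-06-04'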
most of these are pretty self-explanatory and are explained in much better detail in the guide. i put them up here for my own reference so i can execute one of these things without having to look up pages 12-15. :) there is one in particular that i want to talk about: p_setcutoffdate. this is something you want to pay particular attention to when you set up. if you set this value too far in the past, the dts job may never complete, depending on the amount of data you have. the reason being, this value dictates where aggregation starts. in other words, where do you want the aggregation job to start looking at data points? do you want to start from 3 months in the past? expect it to fail. at any rate, don’t worry about setting the cutoff too recent; you won’t lose the older data. just make sure the date you start on falls on the day you chose with p_setweekstartday. let me explain! the previous stuff, you know, the stuff prior to the p_setcutoffdate value, can be brought in manually. you can read in a week, a month, etc. here are the commands (watch for word wrap):
  • daily samples
    dtsrun.exe /F "[MOM 2005 Reporting installation directory]\Reporting\BuildDailySampleAggregations_AnyDays.dts" /W True /A "ServerName":"8"="YourServer" /A "DatabaseName":"8"="SystemCenterReporting" /A "StartDate":"8"="YYYY-MM-DD" /A "EndDate":"8"="YYYY-MM-DD"
  • weekly samples
    dtsrun.exe /F "[MOM 2005 Reporting installation directory]\Reporting\BuildWeeklySampleAggregations_AnyDays.dts" /W True /A "ServerName":"8"="YourServer" /A "DatabaseName":"8"="SystemCenterReporting" /A "StartDate":"8"="YYYY-MM-DD" /A "EndDate":"8"="YYYY-MM-DD"
  • alerts
    dtsrun.exe /F "[MOM 2005 Reporting installation directory]\Reporting\BuildAlertAggregations_AnyDays.dts" /W True /A "ServerName":"8"="YourServer" /A "DatabaseName":"8"="SystemCenterReporting" /A "StartDate":"8"="YYYY-MM-DD" /A "EndDate":"8"="YYYY-MM-DD"
now that we have that out of the way, why else would the dts job fail? here are the reasons:
  1. dts job is extremely cpu intensive and space intensive (tempdb).
  2. uploadaggregations table may not have a recent timestamp.

if you look in the dts job itself, the instructions are executed in parallel rather than serially. this may not be a big deal, since ordinarily summary reporting shouldn’t take very long to run. so a small spike in cpu, in the dead of night… who cares? but factor in tons of data to chew through, and you’ve got some issues. we initially modified the dts job to execute serially, thinking the parallelism was what was causing all the problems. not quite… refer back to #2.

this uploadaggregations table. hmmm. it’s not in the accompanying guide. odd... turns out this table holds the timestamps that indicate from which date forward the summary reporting pack should look at data points. remember, the p_setcutoffdate parameter only tells the dts job from which date to start; it only matters for the maiden voyage. to see what i’m talking about, issue this command in query analyzer. it won’t break anything… no, really… it won’t.

select LastDate, LastWeek, AlertLastDate, AlertLastWeek 
from dbo.uploadaggregations 

select top 1 dategenerated_pk as LastDate 
from sc_daily_counterdatafact_table 
order by LastDate desc 

select top 1 weekgenerated_pk as LastWeek 
from sc_weekly_counterdatafact_table 
order by LastWeek desc 

select top 1 dategenerated_pk as AlertLastDate 
from sc_daily_alertfact_table 
order by AlertLastDate desc 

select top 1 weekgenerated_pk as AlertLastWeek 
from sc_weekly_alertfact_table 
order by AlertLastWeek desc 

these queries will list the values in the table that need your focus. the fields in the first result set are markers indicating where the dts job’s current run begins looking at data points. if you have a value way back in the past, it’s going to go all the way back and look.
 
to correct this, just pair up the names and dates. if the timestamps in the first table are older, raise them to match the values in their pairs. for example, if lastdate in the first table is older than lastdate from the second query, change lastdate in the first table to match. by the way, if you have multiple rows in the first table, that means you're sending more than one management group to your warehouse. :)
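to make that concrete, here's the shape of the fix (the bracketed bit is whatever the matching select above returned; i'm deliberately not guessing the date format, so copy it verbatim):

update dbo.uploadaggregations set LastDate = '[value from the sc_daily_counterdatafact_table query]'

do the same for LastWeek, AlertLastDate, and AlertLastWeek if their pairs are off too. and back up the table first, obviously.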
 
hope this helps.