
problem encountered using ftp-ssl with opalis (and corrected)

for a few days, i had been intermittently pulling my hair out trying to figure out why ftp-ssl with the opalis “upload file” object wasn’t working. after trying many permutations, i finally figured it out. sigh.

it required opening some ports and other configuration… but the last thing that got me was one particular setting, which i’ll get to in a second. for now, let’s examine the error in the output:

Error Summary: Connection to FTP site failed
Details:
OPR-FTP(9560) v3.6.17.8 SCRIPT LOG FILE
 
Thu Sep 13 08:46:52 -- Line 6:     FTPLOGON "myftpsite" /user=xxxxxxxx /pw=************** /port=xxx /servertype=FTPSDATA /trust=ALL /timeout=30
Thu Sep 13 08:46:52             => *Logging on to <myftpsite> as SSL/FTP with secure control and data channels.
Thu Sep 13 08:46:52             => *Logon in progress...
Thu Sep 13 08:47:07             => *Change directory (CWD) failed during log on -- may need to use /allowerrors option.
Thu Sep 13 08:47:08             => *Connection to FTP site failed. [1152]
Thu Sep 13 08:47:08 -- Line 7:     IFERROR goto errorexit
Thu Sep 13 08:47:08 -- Line 14:    :errorexit
Thu Sep 13 08:47:09 -- Line 15:    LOGMSG "Error executing FTP script"
Thu Sep 13 08:47:09             => Error executing FTP script
Thu Sep 13 08:47:09 -- Line 16:    EXIT
Thu Sep 13 08:47:09             => *Exit OPR-FTP.
<** CLOSED SCRIPT LOG FILE
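a quick aside on that FTPLOGON line: /servertype=FTPSDATA and /trust=ALL translate to explicit ftps with an encrypted data channel and no certificate validation. here’s a minimal sketch of the same logon using python’s standard ftplib, just for illustration (not what opalis runs under the hood); the host, port, and credentials are placeholders for the masked values above:

import ftplib
import ssl

# mimic /trust=ALL: accept any server certificate (fine for testing, not production)
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# explicit ftps: connect in the clear, then upgrade the control channel to TLS
ftp = ftplib.FTP_TLS(context=ctx)
ftp.connect("myftpsite", 21, timeout=30)  # placeholder host/port (masked in the log)
ftp.login("xxxxxxxx", "secret")           # placeholder credentials; login() issues AUTH TLS first
ftp.prot_p()                              # PROT P: encrypt the data channel too ("secure control and data channels")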


from the way this reads, something failed around the CWD command during the log on process. this is what threw me. had i been smart enough to turn on trace logging at this point, i would have spent much less time figuring this out. as it was, i went through every permutation i could think of, trying to find the magic combination. after many cycles of dumb, i discovered tracing was an option (it’s not in the manual) and turned it on.

tracing the error revealed the following (truncated):

ReadServerResponse::read 47 bytes: 250 CWD successful. "/" is current directory.
ReadServerResponse::read 46 bytes: 150 Opening data channel for directory list.
ReadServerResponse::read 33 bytes: 425 Can't open data connection.
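that 425 is the classic signature of an active (PORT) mode data connection being blocked: the logon and CWD succeed over the control channel, but when the server tries to open a connection back to the client for the directory list, a firewall or NAT in between drops it. continuing the ftplib sketch from above (again, an illustration, not the opalis object itself):

# active (PORT) mode: the *server* dials back to the *client* for every data
# transfer -- exactly the pattern firewalls and NAT tend to block
ftp.set_pasv(False)
try:
    ftp.retrlines("LIST")  # the directory list that failed during logon
except (ftplib.error_temp, OSError) as err:
    print(err)             # typically "425 Can't open data connection." or a timeout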


so as you can see, the misleading error indicated the failure was in the log on process when, in actuality, the log on worked fine. now i knew i could stop screwing around with security and test some of the other options, and i stumbled upon the one that worked: judging from the trace below, switching the data connection to passive mode.

[image: screenshot of the “upload file” object setting that did the trick]


once again, if you look at the trace logs, the successful result shows up here (truncated again):

ReadServerResponse::read 47 bytes: 250 CWD successful. "/" is current directory.
ReadServerResponse::read 52 bytes: 227 Entering Passive Mode (216,133,255,186,254,27)
ReadServerResponse::read 25 bytes: 150 Connection accepted
ReadServerResponse::read 17 bytes: 226 Transfer OK
ReadServerResponse::read 19 bytes: 200 Type set to A
ReadServerResponse::read 17 bytes: 226 Transfer OK
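the 227 reply is the server handing the client an address and port to connect to, so the client makes the data connection outbound instead of waiting for one to come in. in the ftplib sketch, that’s simply (passive mode is actually ftplib’s default):

ftp.set_pasv(True)       # passive (PASV) mode: the client opens the data connection
ftp.retrlines("LIST")    # 227 ... 150 ... 226, as in the trace above
with open("somefile.txt", "rb") as fh:       # hypothetical local file, standing in for the real upload
    ftp.storbinary("STOR somefile.txt", fh)
ftp.quit()

(the last two numbers in the 227 reply encode the data port: 254*256 + 27 = 65051 here, which is also why passive mode still needs the server’s passive port range open on the firewall.)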
