Date: Wed, 3 Oct 2007 13:23:39 -0500 (CDT)
From: Marco Mambelli
To: mwt2-core-l@LISTSERV.INDIANA.EDU
Subject: OSG env drill-down

Hi all,

The environment is copied by configure-osg.sh into monitoring/osg-job-environment.conf (automatically generated); additional local environment variables should be set in osg-local-job-environment.conf. These files are parsed in JobDescription.pm as below:

    # We override the autohandler for environment so we can tack on
    # stuff from osg-attributes.conf
    sub environment
    ...
        my %result = ();            # map key to value
        if ( exists $self->{'_grid3_info'} &&
             ref($self->{'_grid3_info'}) eq 'HASH' ) {
            # use instance knowledge - avoid reading the file again
            %result = %{ $self->{'_grid3_info'} };
        } else {
            my %preset = ( %ENV );  # as meager as it may be
            # no previous knowledge, need to read the file
            my $fn = File::Spec->catfile( $ENV{'GLOBUS_LOCATION'}, '..',
                                          'monitoring', 'osg-job-environment.conf' );
            if ( open( INFO, "<$fn" ) ) {
                my ($k,$v);
                while ( <INFO> ) {
                    ($k,$v) = parse_osg_attributes_line($_);
                    next unless defined $k;
                    # substitute and unquote the value, remember it
                    $result{$k} = $preset{$k} = trim( $v, \%preset );
                }
                close INFO;
            }

            # Now do the same thing for the "local" file
            my $local_fn = File::Spec->catfile( $ENV{'GLOBUS_LOCATION'}, '..',
                                                'monitoring', 'osg-local-job-environment.conf' );
            if ( open( INFO, "<$local_fn" ) ) {
                my ($k,$v);
                while ( <INFO> ) {
                    ($k,$v) = parse_osg_attributes_line($_);
                    next unless defined $k;
                    # substitute and unquote the value, remember it
                    $result{$k} = $preset{$k} = trim( $v, \%preset );
                }
                close INFO;
            }

            # remember for next invocation in this instance
            # Note: If the file was unreadable, this is negative caching.
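As an aside, both .conf files use the shell-style KEY="value" syntax that parse_osg_attributes_line() expects; trim() unquotes the value and substitutes variables already seen earlier in the file or inherited from %ENV. A hypothetical osg-local-job-environment.conf fragment (the variable names and values below are illustrative, not from the original mail) might look like:

```shell
# Hypothetical osg-local-job-environment.conf fragment -- example values only.
# One KEY="value" assignment per line; references to previously defined
# variables (here OSG_DATA) are expanded by trim() when the file is parsed.
MY_SITE_SCRATCH="${OSG_DATA}/scratch"
MY_SITE_NAME="MWT2_UC"
```

Because the local file is read after the generated one, a key defined here overwrites the same key from osg-job-environment.conf in %result.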
            $self->{'_grid3_info'} = { %result };
        }

Job managers use that array, as in condor.pm:

    @environment = $description->environment();

    foreach $tuple (@environment) {
        if (!ref($tuple) || scalar(@$tuple) != 2) {
            return Globus::GRAM::Error::RSL_ENVIRONMENT();
        }
        if (exists($library_vars{$tuple->[0]})) {
            $tuple->[1] .= ":$library_string";
            $library_vars{$tuple->[0]} = 1;
        }
    }

$description->environment() is found also in fork and managed fork. I know very little Perl, so I defer to someone else for a detailed analysis, but I think this is what propagates the variables into the environment of the job executing on the worker node.

The previous findings were on tier2-02; on uct2-grid6 there is also pbs.pm:

    foreach my $tuple ($description->environment()) {
        if (!ref($tuple) || scalar(@$tuple) != 2) {
            return Globus::GRAM::Error::RSL_ENVIRONMENT();
        }
        if (exists($library_vars{$tuple->[0]})) {
            $tuple->[1] .= ":$library_string";
            $library_vars{$tuple->[0]} = 1;
        }
        push(@new_env, $tuple->[0] . '="' . $tuple->[1] . '"');
        $tuple->[0] =~ s/\\/\\\\/g;
    ...

So, from checking the files and from the emails of other OSG site administrators, it seems that no additional configuration should be required in order to have the OSG environment defined. I defer to a more in-depth analysis of the Perl code for a final answer.

Thank you,
Marco

--------------- email with Terrence

Thanks Terrence,
I saw that all the jobmanagers access the array $description->environment(), which is filled in JobDescription.pm. I have not checked the details of condor.pm and pbs.pm yet, but that seems to be the way the environment is passed to the jobs started on the worker nodes.
Thank you,
Marco

On Wed, 3 Oct 2007, Terrence Martin wrote:

> What I understand is that the GRAM grabs the contents of osg-job-environment.conf for you. This is
> then stored as an array in Perl. I have used this fact to allow me to manipulate the environment
> (adding/subtracting from it) in Condor by manipulating that array.
> This is necessary for certain runtime modifications the jobmanager makes.
> Look for the map function in condor.pm.
>
> The array contents are handed off to the batch system. In condor.pm the env is passed as a string
> into the condor submit script. Condor then reconstructs that environment for you on the worker node.
>
> Note: Condor does this in such a way as to make it impossible to set
>
>     OSG_WN_TMP=$_condor_scratch_dir
>
> unless you perform the following step immediately prior to the job executing:
>
>     export OSG_WN_TMP=$_condor_scratch_dir
>
> There is one other bit you need to do on the WN in order to get the full environment:
>
>     source $OSG_GRID/setup.sh
>
> This step is required to gain access to all of the WN-Client components as well as the WN-Client-related
> x509 infrastructure (i.e. where the CA/CRL are).
>
> Terrence
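Putting Terrence's two worker-node steps together, a job wrapper along these lines would set OSG_WN_TMP immediately before the payload runs and then source the WN-Client setup. This is only a sketch: the /tmp fallback and the readability guard are my additions, and note that Condor exports the per-job scratch directory to the job's environment as _CONDOR_SCRATCH_DIR (uppercase).

```shell
#!/bin/sh
# Sketch of a worker-node job wrapper. Assumptions: the /tmp fallback and
# the -r guard are mine; Condor sets _CONDOR_SCRATCH_DIR in the job env.

# Point OSG_WN_TMP at the Condor scratch dir right before the job executes.
export OSG_WN_TMP="${_CONDOR_SCRATCH_DIR:-/tmp}"

# Pull in the WN-Client environment (client tools, CA/CRL locations).
if [ -r "${OSG_GRID}/setup.sh" ]; then
    . "${OSG_GRID}/setup.sh"
fi

# Hand off to the real job payload.
"$@"
```

The wrapper pattern keeps the export inside the job's own process, which is what works around the Condor limitation Terrence describes: setting OSG_WN_TMP in the .conf files cannot reference a per-job scratch directory that only exists once the job starts.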