XSEDE Unicore 6.6.0p3 Server Deployment Instructions

Table of Contents

Background

Deployment

Unicore downloads

Unicore tarfiles for XSEDE

XSEDE UNICORE core server bundle 6.6.0 p3

Additional Documentation

Unicore Website
  http://www.unicore.eu

Unicore 6 documentation:
  http://www.unicore.eu/documentation/manuals/unicore6/

Unicore 6 architecture documentation:
  http://www.unicore.eu/unicore/architecture.php

Unicore 6 Overview presentation:
  http://www.unicore.eu/documentation/presentations/unicore6/files/General_Introduction_to_UNICORE.pdf

Unicore 6 Installation Guide:
  http://www.unicore.eu/documentation/manuals/unicore6/manual_installation.html

Unicore 6 manuals (client and server):
  http://www.unicore.eu/documentation/manuals/unicore6/

Unicore client tutorials:
  http://www.unicore.eu/documentation/tutorials/unicore6/

XSEDE Specific Support

For XSEDE-specific questions about this document, or for support, email ops-sp-software@xsede.org

Supported Platforms

The Unicore 6 core server bundle is Java and Perl based and should run on nearly any Unix/Linux system that meets the prerequisites listed below.

TCP/IP Port and Firewall Information

The Unicore servers communicate over the following default ports:

Unicore Servers:

Keystores and Truststores (New with 6.6.0p3)

Unicore can now make use of the Globus /etc/grid-security/certificates files for CA truststores and CRLs.

CA Certificates
The trusted CA certificates for UNICORE can be in PEM format, so you can point to the /etc/grid-security/certificates/*.0 files that XSEDE SPs already have in place for Globus. This is new in 6.6.x, and running the xsede-config-setup.sh script will handle this setup for you. The XSEDE CA truststore for UNICORE can be downloaded from http://software.xsede.org/security/xsede-certs.tar.gz and should be installed as /etc/grid-security/xsede-certs.jks.
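If you need to build the JKS truststore yourself, each PEM CA certificate can be imported with the Java keytool (repeat with a unique alias per CA). A minimal sketch; the alias, input file, and password are illustrative:

% keytool -importcert -trustcacerts -noprompt \
    -alias xsede-ca-1 -file /etc/grid-security/certificates/{hash}.0 \
    -keystore /etc/grid-security/xsede-certs.jks -storepass {password}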

Service Certificates
Globus uses host certificates and host keys to set up SSL encryption. The UNICORE equivalent is a service certificate for each service. The location of each service certificate is defined in that service's configuration files, and the certificate must be issued by a CA in the trust store. Service certificates are usually password protected and are referenced by the "alias" (or "friendlyName") in the keystore. They should also go in the /etc/grid-security directory, with a suggested name of {hostname}-unicore.p12, and should be in PKCS12 format with a protecting password and an alias set (the alias can be any value that makes sense). An XSEDE SP can obtain service certificates by submitting an XSEDE ticket to help@xsede.org requesting a UNICORE service certificate, specifying the FQDN of the host where the UNICORE service will run.
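If you already hold a PEM host certificate and key issued by a CA in the trust store, they can be packaged into the suggested PKCS12 layout with openssl; the input file names and the alias here are illustrative:

% openssl pkcs12 -export -name unicore -in hostcert.pem -inkey hostkey.pem \
    -out /etc/grid-security/{hostname}-unicore.p12
Enter Export Password:
Verifying - Enter Export Password: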

User Certificates
User certificates in Globus are X.509 PEM files; Unicore user certificates are X.509 certificates in either JKS or, more commonly, PKCS12 format (.p12). Unicore usually calls these .p12 files "keystores".

Using Existing XSEDE MyProxy Certificates
A MyProxy user certificate can be used as a Unicore user certificate using either of the following methods:

Use the myproxy-logon-unicore command found in the bin directory of the XSEDE ucc tarball,
OR
perform the conversion manually with these steps:
To generate a user certificate in PKCS12 format suitable for Unicore from a MyProxy credential retrieved with the myproxy-logon command, do the following:

% myproxy-logon -l victorh [-t time_in_hours] [-s servername]
Enter MyProxy pass phrase: XXXXXXXXXX
A credential has been received for user victor in /tmp/x509up_u515.
victor@Kraken$ openssl pkcs12 -export -in /tmp/x509up_u515 -out ~/.ucc/victor-myproxy-nics.p12
Enter Export Password:
Verifying - Enter Export Password:

Then use this PKCS12 file by specifying it in the UCC preferences file as the user keystore.

victor@krakenpf2:/nics/a/home/victor> cat .ucc/preferences
keystore=/nics/a/home/victor/.ucc/victor-myproxy.p12
password=xxxxxxxxxxx
storetype=pkcs12
registry=https://narwhal.nics.utk.edu:8080/REGISTRY/services/Registry?res=default_registry
#registry=https://hexapuma.ncsa.illinois.edu:8080/NCSA/services/Registry?res=default_registry
#registry=https://narwhal.nics.utk.edu:8080/REGISTRY/services/Registry?res=default_registry https://hexapuma.ncsa.illinois.edu:8080/NCSA/services/Registry?res=default_registry
truststore=/nics/a/home/victor/.ucc/truststore-xsede.jks
truststorePassword=xxxxxxxxxxxx
log4j.appender.A1.File=/nics/a/home/victor/ucc.log
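
With the preferences file in place, connectivity can be verified with UCC; assuming the ucc command is on your PATH, something like:

% ucc connect
% ucc list-sites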

Viewing a PKCS12 keystore

To view the contents of a PKCS12 keystore file do the following:

victor@kraken$ openssl pkcs12 -in .ucc/victor-myproxy-nics.p12 -info
Enter Import Password:
MAC Iteration 2048
MAC verified OK
PKCS7 Encrypted data: pbeWithSHA1And40BitRC2-CBC, Iteration 2048
Certificate bag
Bag Attributes
    localKeyID: F8 46 7F CF 11 48 65 89 DF 00 E0 EE 67 74 40 EB 36 85 72 8F
subject=/DC=EDU/DC=TENNESSEE/DC=NICS/O=National Institute for Computational Sciences/CN=Victor Hazlewood
issuer=/DC=EDU/DC=TENNESSEE/DC=NICS/O=National Institute for Computational Sciences/CN=MyProxy
-----BEGIN CERTIFICATE-----
MIIEHTCCAwWgAwIBAgIBGzANBgkqhkiG9w0BAQUFADCBkDETMBEGCgmSJomT8ixk
ARkWA0VEVTEZMBcGCgmSJomT8ixkARkWCVRFTk5FU1NFRTEUMBIGCgmSJomT8ixk
ARkWBE5JQ1MxNjA0BgNVBAoMLU5hdGlvbmFsIEluc3RpdHV0ZSBmb3IgQ29tcHV0
YXRpb25hbCBTY2llbmNlczEQMA4GA1UEAwwHTXlQcm94eTAeFw0xMTA2MjgyMDM1
MjFaFw0xMTA3MDkyMDQwMjFaMIGZMRMwEQYKCZImiZPyLGQBGRMDRURVMRkwFwYK
CZImiZPyLGQBGRMJVEVOTkVTU0VFMRQwEgYKCZImiZPyLGQBGRMETklDUzE2MDQG
A1UEChMtTmF0aW9uYWwgSW5zdGl0dXRlIGZvciBDb21wdXRhdGlvbmFsIFNjaWVu
Y2VzMRkwFwYDVQQDExBWaWN0b3IgSGF6bGV3b29kMIGfMA0GCSqGSIb3DQEBAQUA
A4GNADCBiQKBgQDDdt0je9Yww2zVoOrE3pBe44gni4yKGp/40fq1iyS4cz15hAcx
BYP/eIC5H6h6+N+KsRy8eknLmsUYy7VMLyo9pSr/TCDFh0z31/6hhT7fnDr8t4lz
NfZagNlj7UjjUCVy4wz8CZ3ydy/4Nv5+asb9d+lPuOh5uO8dTM2xto+4oQIDAQAB
o4H6MIH3MA4GA1UdDwEB/wQEAwIEsDAdBgNVHQ4EFgQU2C2iNhSZb9gNfLAiPEBf
DrgE2GwwHwYDVR0jBBgwFoAU78R9SW6e2xlJBYv8keZnWODHXp4wDAYDVR0TAQH/
BAIwADA0BgNVHSAELTArMAwGCisGAQQBDQELAgEwDAYKKoZIhvdMBQICAzANBgsq
hkiG90wFAgMCATBhBgNVHR8EWjBYMFagVKBShlBodHRwOi8vd3d3Lm5pY3MudGVu
bmVzc2VlLmVkdS9zaXRlcy93d3cubmljcy50ZW5uZXNzZWUuZWR1L2ZpbGVzL2Nh
L2RjNzUzNDFmLmNybDANBgkqhkiG9w0BAQUFAAOCAQEAEoHGAy5OK+uchrBI6YND
byvr4Ln9tJlcCDzla9GNl0BdrQq9A1dCZaS6ngrsYd4PLePyLowANbLN+HLlLZY9
TdYzZx3/jr3v/pbAGUVVFlhMimooLszww+WyNLXkwKLf9WdnUCJX5X/3gW5F+S9C
EIItPqKr+Wnf40st30XVNTUMBK7HMfpvKXQ6YsBv7auXtPXZjEOLcOVBT1m6ubgI
TBXjRPiklSB8VmX39xEYSDipWA97/vkVO8yNZrPTegUSPzF46UmqDOEMM2fqZlBU
GqECVBx9rdIO7z/0BqOS2gRo1kJymRiR8MlQRh3CmB8wT+ry0ki9rm3OlYR6hil/
DQ==
-----END CERTIFICATE-----
PKCS7 Data
Shrouded Keybag: pbeWithSHA1And3-KeyTripleDES-CBC, Iteration 2048
Bag Attributes
    localKeyID: F8 46 7F CF 11 48 65 89 DF 00 E0 EE 67 74 40 EB 36 85 72 8F
Key Attributes: 
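
The Java keytool can also list a PKCS12 keystore, which is a convenient way to check the alias name; for example:

% keytool -list -v -storetype pkcs12 -keystore .ucc/victor-myproxy-nics.p12
Enter keystore password: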

Software Installation

Prerequisites

Unicore software requires the following:

  • Java 6 JRE or SDK, or later (Sun/Oracle or OpenJDK Java 6 recommended)
  • Perl 5.8.8 or greater
  • Python 2.4 or greater
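
A quick way to confirm the prerequisites are in place:

  % java -version
  % perl -v
  % python -V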

    Install user and file ownership

    All Unicore software except the TSI should be installed as a non-root user, usually 'unicore'. The TSI service needs to run as root so that it can switch to the end user for job submission, file access, etc.

    Pre-Install location

    The server software is downloaded and untarred into a directory we will call the "PREINSTALL" directory. The software is then configured by setting the appropriate parameters in the configure.properties file and "installed" into the directory specified by the "INSTALL_PATH" variable in configure.properties. See the configure.properties file for more information. XSEDE provides an xsede-config-setup.sh script to make the installation easy and consistent across XSEDE SPs.

    It is recommended that the UNICORE server software be installed on non-NFS local disk. The installation directory includes several control and log files that are read and written frequently. Use NFS for the installation directory at your own risk.

    Pre-Installation Steps

    Prior to installation you need to understand which UNICORE servers are necessary for your installation and select the appropriate hosts for the configuration you will be targeting. For XSEDE we recommend each site install a Gateway, a Unicore/X, and a TSI. The XUUDB is no longer needed; it has been replaced by the grid-mapfile.
    Configuring grid-mapfile support: in the unicorex/conf/uas.config file, add the following lines.
    Note: Substitute the location of the grid-mapfile you will be using for the {LOCATION_OF_GRIDMAPFILE} placeholder in the snippet below. A format example follows the snippet.
        container.security.attributes.order= GRID-MAPFILE
        container.security.attributes.GRID-MAPFILE.class=eu.unicore.uas.security.gridmapfile.GridMapFileAuthoriser
        container.security.attributes.GRID-MAPFILE.file={LOCATION_OF_GRIDMAPFILE}
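    For reference, each line of a grid-mapfile maps a certificate DN to a local account; the DN and username below are hypothetical:
        "/C=US/O=Example Organization/CN=Jane Doe" jdoe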
    
    If you have more than one compute resource, there should be one Unicore/X and one TSI for each compute resource. XSEDE operates a central registry that all sites register with, so there is no need to run a local registry at the sites.

    See the "Basic Scenarios" section of the Unicore Installation Guide to help understand this pre-installation task.

    You will need a unicore user (or a substitute) available on all the systems where the Unicore services will be installed.

    Installation Overview

    The basic installation process will be essentially the following steps:
    1. Download the Unicore core server tarball
    2. Run the xsede-config-setup.sh script first to create a configure.properties file. You can customize it to your deployment needs, but the script generally provides all the basic configuration required. Review configure.properties to make sure the values are appropriate for your site's UNICORE installation; in particular, check the server certificate files and the password values.
    3. Obtain and configure security keystores and truststore for the components
    4. Configure and install the Gateway, UNICOREX and TSI
    5. Start the servers
    6. Note: For first-time installations, repeat steps 2-5 as necessary until the configuration is correct and tests complete successfully

    Installation Details

    This installation assumes the UNICORE services run on two systems: one for the TSI service, and one for the Gateway and UNICORE/X. If you are running the Gateway, UNICORE/X and TSI all on the same host, skip step 6 and steps 10-12.
      Part 1: Server Software Installation (Gateway, UNICORE/X)

    1. Install the prerequisites (OpenJDK and Perl 5.8.8 or later) and make sure the unicore user exists on the target systems
    2. Download XSEDE Unicore core server software tarfile
    3. Untar the unicore package in a PREINSTALL directory
    4. Obtain the SSL service certificates and Certificate Authority files. If you cannot generate your own certificates trusted by XSEDE, submit a ticket to obtain the service certificates. The TSI does not use SSL by default, so a service certificate for the TSI is optional.
    5. Run the xsede-config-setup.sh script and answer the questions. When complete the configure.properties file should be setup with the XSEDE specifications and your local specifications you specified in the answers to the script.
      Note: Once you get UNICORE working, be sure to make an escrow copy of configure.properties.
    6. If the TSI is not on the same host as the Gateway and UNICOREX, edit the configure.properties file and set "tsi=false".

    7. Run the script PREINSTALL/configure.py with these arguments:
      PREINSTALL/configure.py {unicore username} {hostname}
      
    8. Then run PREINSTALL/install.py to install the software. This installs all configured components into the directory specified by INSTALL_PATH.

    9. If your TSI is on a different host, copy the entire contents of the PREINSTALL directory to that host, then set up the services on the TSI host. First modify configure.properties for this install. Set:

      gateway=false
      unicorex=false
      tsi=true
      registry=false
      xuudb=false

      Also double-check that you selected the most appropriate TSI (tsiSelected) for your system (example: tsiSelected=tsi/linux_torque). Note: The default is to not configure SSL for the TSI; XSEDE considers SSL unnecessary between the UNICORE/X and TSI services within a site.
    10. (On the tsi service host) run PREINSTALL/configure.py {unicore username} {hostname}
    11. (On the tsi service host) run PREINSTALL/install.py to install the software (based on the configure.properties values). This installs all TSI software into the directory specified by INSTALL_PATH in configure.properties
    12. Create the directory specified by the uxTSIWorkingDirectoriesBasedir variable in configure.properties on the TSI server. This directory needs to have permissions 1777
    13. Also check the following unicorex/conf/xnjs_legacy.xml values. They are specified in the unicorex/conf/xnjs_legacy.xml file on the unicorex server, but the directories and scripts they point to must exist on the TSI side. Therefore, if you performed the software installation in two steps because the gateway and unicorex are on a different server than the TSI, manually update these values in unicorex/conf/xnjs_legacy.xml and double-check that the directories and scripts below actually exist on the TSI host.
      In the below NICS example, values start with /scratch/xsede/unicore-6.6.0
          <eng:Property name="XNJS.filespace" value="/scratch/xsede/unicore-6.6.0/FILESPACE"/>
          <eng:Property name="CLASSICTSI.TSI_LS" value="/scratch/xsede/unicore-6.6.0/tsi_torque/perl/tsi_ls"/>
          <eng:Property name="CLASSICTSI.TSI_DF" value="/scratch/xsede/unicore-6.6.0/tsi_torque/perl/tsi_df"/>
          <eng:Property name="scp-wrapper.sh" value="/scratch/xsede/unicore-6.6.0/tsi_torque/perl/scp-wrapper.pl"/>
      
    14. Next, add the PSC backup registry to your unicorex config. In the unicorex/conf/uas.config file, in addition to the
      	container.externalregistry.url=https://unicore-registry.nics.utk.edu:8080/REGISTRY/services/Registry?res=default_registry
      
      entry, add the PSC registry as the backup registry:
      	container.externalregistry.url.2=https://unicore.psc.xsede.org:8080/REGISTRY/services/Registry?res=default_registry
      

      Part 2: Configuration and Security Settings

    15. Configure the TSI software settings.

      The following gives examples for selecting Torque as the resource manager.

      Edit and verify the settings in INSTALL_PATH/tsi_torque/perl/tsi and INSTALL_PATH/tsi_torque/conf/tsi.properties. If using linux_torque as the TSI, check specifically that $pbs_bin_dir points to the directory containing your Torque (PBS) commands and change it as necessary. Double-check all the settings in the tsi.properties file.

      Execute "PREINSTALL/tsi/Install_permissions.sh INSTALL_PATH/tsi_torque" after update to set TSI directory permissions (since it is running as root).

      unicore@grid:/scratch/xsede/unicore-servers-6.6.0/tsi> ./Install_permissions.sh /scratch/xsede/unicore-6.6.0/tsi_torque/
      
      Restricting permissions of directory /scratch/xsede/unicore-6.6.0/tsi_torque/ to read only for the owner but executable for world (needed for tsi_ls).
      
      Restricting permissions of scripts in directory /scratch/xsede/unicore-6.6.0/tsi_torque/bin to be executable for the owner
      
      Making tsi_ls and tsi_df world readable again.
      Making helper scripts world executable again.
      
      Showing permissions: "ls -al /scratch/xsede/unicore-6.6.0/tsi_torque/"
      total 24
      dr-x--x--x 6 unicore unicore 4096 2012-07-24 11:48 .
      drwxr-xr-x 4 unicore unicore 4096 2012-07-24 11:50 ..
      dr-x--x--x 2 unicore unicore 4096 2012-07-24 11:48 bin
      dr-x--x--x 2 unicore unicore 4096 2012-07-24 12:03 conf
      dr-x--x--x 2 unicore unicore 4096 2012-07-24 11:48 logs
      dr-x--x--x 2 unicore unicore 4096 2012-07-24 12:02 perl
      
      ######################################################################
      Check that all parent directories of /scratch/xsede/unicore-6.6.0/tsi_torque/ are world executable.
      Otherwise the tsi_ls script cannot be executed.
      ######################################################################
      
    16. Configure the target system settings in unicorex/conf/simpleidb, especially the items under TargetSystemProperties and IDBApplication. Double-check all these values and set them as necessary.

    17. Check and configure the query interval for status checking (usually with qstat) by updating the unicorex/conf/xnjs_legacy.xml file
          <eng:Property name="CLASSICTSI.statusupdate.interval" value="3000"/>
      
      The value is in milliseconds; the default is 3 seconds (3000). Change it to a value appropriate for your site.

    18. Create the grid-mapfile that you will be using for UNICORE, at the location specified in the grid-mapfile configuration above. Note that UNICORE versions prior to 6.6.0 required the "/C=US" component to be listed at the end of the DN instead of at the beginning, where Globus places it. This has been fixed by the UNICORE team in versions 6.6.0 and greater.
    19. Configure any firewall changes needed so that Unicore services can talk to one another and users can reach the gateway (firewall changes, iptables changes, or similar).
    20. Configure any storage capabilities needed. See the Storage Configuration section below and the Unicorex manual for further details on this configuration.
    21. If you are configuring UNICORE to work with Genesis II perform the following steps:
        Download the XSEDE configuration package from http://software.xsede.org/production/genesis2/genesis2-v2.7/unicore-in-xsede-1.6.0-p3.zip
        Note: For GFFS staging to work, the Genesis II client must be installed somewhere on the TSI host (and that is the path to the Genesis II executable that setupForXSEDE.sh asks for). This is because transfers go directly to the TSI file system instead of being tunneled through the UNICORE/X. The Genesis II client is not part of the unicore-in-xsede archive or any other artifact that we provide; it must be installed manually. Preferably, install Genesis II on a local file system; we experienced problems when installing it on a network file system (on both NFS and Lustre). It is suggested to install the Genesis II client software prior to this step, but if you know where you plan to install it you can continue now and install the client later.
      • Copy the zip file into the "unicorex" directory and cd into that folder.
        CAUTION: the configuration package will not work if the following steps are not executed from the "unicorex" folder!
      • Unzip the configuration package into the current folder and run the setupForXSEDE.sh shell script. It will ask you for the absolute path of the Genesis II client on your TSI host; the client is necessary to allow users to stage files from the global federated file system (GFFS) into the working directories of UNICORE jobs. The script should confirm each successful step with a "done" message. Running the script multiple times does no harm; in fact, it can be used later to modify the location of the Genesis II client directory.
        The location of the Genesis II client directory can also be specified without invoking the script, by editing the /unicorex/conf/uas.config file and setting the genii.client.dir configuration property, as sketched below.
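        A sketch of the property as it might appear in uas.config (the client path is hypothetical):
            genii.client.dir=/opt/genesis2-client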

      Part 3: Start Services

    22. On the non-TSI service host, start the Unicore services with INSTALL_PATH/{service}/bin/start.sh and check the appropriate log files (see the example after this list).
    23. On the TSI host, start the TSI with INSTALL_PATH/TSI/bin/start_tsi and check the log files.
    24. Check and resolve any errors. Most errors experienced here have been reported to be keystore and truststore related.
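
    For illustration, using the NICS-style INSTALL_PATH from the earlier examples (the paths and the TSI directory name are site-specific):

      % /scratch/xsede/unicore-6.6.0/gateway/bin/start.sh
      % /scratch/xsede/unicore-6.6.0/unicorex/bin/start.sh
      (as root, on the TSI host)
      % /scratch/xsede/unicore-6.6.0/tsi_torque/bin/start_tsi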

    Post Server Installation

    Client Software Installation

    The XSEDE Service Providers should install the server components described above as well as the UNICORE command-line client (UCC). Versions 6.5.1 and 6.6.0 are both available, and both work with the UNICORE 6.6.0 server components.

    Install the client software on a login node of your compute resource, or another appropriate node where users log in, by selecting a suitable directory on your system and untarring the tarball there. Provide modules for the client software named "unicore/6.6.0" and "unicore/6.5.1", respectively. The "myproxy-logon-unicore" command depends on the Globus myproxy-logon command to obtain a user credential, so the Globus Toolkit is a dependency.

    End users and XSEDE staff can use the UNICORE Rich Client by installing it on their own systems.

    Client Software Testing: Unicore Commandline Client (UCC) testing

    Unicore Rich Client testing

    Service Configuration

    See the appropriate Unicore service manual for configuration options for each service. http://www.unicore.eu/documentation/manuals/unicore6/

    Storage Configuration

    There are three types of Storages in Unicore: the default HOME storage, which requires no configuration but can be turned off; a Target System storage; and a Storage Factory storage. Each of these is configured in the Unicorex (xnjs) service component by modifying unicorex/conf/uas.config.

    Note: After the initial installation of Unicore, it is not sufficient to just edit the uas.config file and restart the unicorex service to make a storage change. In addition, the files in the unicorex/data directory must all be deleted; otherwise the old Storage configuration will linger and results will be unpredictable. Therefore, be very careful when removing or adding a storage after the initial installation, and make sure there are no Unicore jobs in flight or running that could be affected by removing the contents of unicorex/data. You have been warned. This will be communicated back to the Unicore developers/documenters.

    By default the HOME storage is already configured and can be turned off by setting

       uas.targetsystem.home.disable=true
    
    To configure a target system storage area such as /tmp, /scratch or /lustre/scratch, follow the instructions in section 2.8 of the Unicore 6 Unicorex manual. Two examples follow.

    To configure /tmp

    coreServices.targetsystem.storage.1=TEMP
    coreServices.targetsystem.storage.1.type=FIXEDPATH
    coreServices.targetsystem.storage.1.path=/tmp
    coreServices.targetsystem.storage.1.protocols=UFTP BFT RBYTEIO
    

    To configure /lustre/scratch

    coreServices.targetsystem.storage.2=LUSTRESCRATCH
    coreServices.targetsystem.storage.2.type=VARIABLE
    coreServices.targetsystem.storage.2.path=/lustre/scratch/$USER
    coreServices.targetsystem.storage.2.protocols=UFTP BFT RBYTEIO
    
    To use the Storage Factory and set the DEFAULT storage type, add "de.fzj.unicore.uas.util.CreateSMSOnStartup" to the uas.onstartup line:
    uas.onstartup=de.fzj.unicore.uas.util.DefaultOnStartup \
    de.fzj.unicore.cisprovider.impl.InitOnStartup \
    de.fzj.unicore.uas.util.CreateSMSOnStartup
    
    and check these values also. See section 2.9 in the Unicore 6 Unicorex manual.
    uas.storagefactory.types=DEFAULT
    uas.storagefactory.DEFAULT.description=Lustre Scratch
    uas.storagefactory.DEFAULT.path=/lustre/scratch/unicore
    uas.storagefactory.DEFAULT.cleanup=true
    defaultsms.workdir=/lustre/scratch/unicore
    

    Project Configuration

    In order to allow users to specify a project to charge, instead of charging their default project, add the following to the unicorex/conf/simpleidb file, in the targetSystemProperties section:
            <!-- Project to charge the job to -->
            <idb:Resource xmlns:idb="http://www.fz-juelich.de/unicore/xnjs/idb">
                <idb:Name>Project</idb:Name>
                <idb:Type>string</idb:Type>
                <idb:Description>The project to charge the job to</idb:Description>
                <idb:Default></idb:Default>
            </idb:Resource>
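    Once this resource is defined, users can request a specific project in their UCC job description. A sketch, assuming the JSON-style UCC job file format; the executable and project name are hypothetical:

        {
          "Executable": "/bin/date",
          "Resources": { "Project": "TG-ABC123" }
        }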
    

    Queues Configuration

    In order to allow users to specify a particular queue to submit a job to, add the following to the unicorex/conf/simpleidb file. In the targetSystemProperties section, add the following lines.
    Note: Add the queue names available on your resource as allowed values; the values shown here serve only as an example.
            <!-- queues -->
            <idb:Resource xmlns:idb="http://www.fz-juelich.de/unicore/xnjs/idb">
                <idb:Name>Queue</idb:Name>
                <idb:Type>choice</idb:Type>
                <idb:Description>The queues available on Darter's batch system</idb:Description>
                <idb:Default>NONE</idb:Default>
                <idb:AllowedValue>NONE</idb:AllowedValue>
                <idb:AllowedValue>batch</idb:AllowedValue>
                <idb:AllowedValue>login</idb:AllowedValue>
            </idb:Resource>
    

    GridFTP Configuration

    Unicore 6.6.0 comes with built-in support for GridFTP transfers. To configure it for your installation, complete the following steps.
    Note: {GLOBUS_LOCATION} refers to the location of the globus client on your install machine.

    1. In the unicorex/conf/simpleidb file, include the following lines
              <!-- Location of globus-url-copy executable -->
              <idb:Property name="globus-url-copy" value="{GLOBUS_LOCATION}/bin/globus-url-copy"/>
          
    2. In the unicorex/conf/xnjs_legacy.xml file, include the following lines within the <eng:Properties> section (the -cd option allows gridftp to create missing directories while transferring files):
          <eng:Property name="globus-url-copy" value="{GLOBUS_LOCATION}/bin/globus-url-copy"/>
          <!-- additional parameters for globus-url-copy -->
          <eng:Property name="globus-url-copy.parameters" value="-cd"/>
          
    3. Next, you need to modify the XNJS configuration to enable a component that stores the proxy in the format expected by GSI (no encryption, PEM format). Add the "<eng:Processor>de.fzj.unicore.uas.xnjs.ProxyCertToUspaceProcessor</eng:Processor>" line in the ProcessingChain section as shown below:
        <eng:ProcessingChain actionType="JSDL" jobDescriptionType="{http://schemas.ggf.org/jsdl/2005/11/jsdl}JobDefinition">
          <!-- stores proxy to uspace -->
          <eng:Processor>de.fzj.unicore.uas.xnjs.ProxyCertToUspaceProcessor</eng:Processor>
          <!-- usual entries -->
          <eng:Processor>de.fzj.unicore.xnjs.jsdl.JSDLProcessor</eng:Processor>
          <eng:Processor>de.fzj.unicore.xnjs.ems.processors.UsageLogger</eng:Processor>
        </eng:ProcessingChain>
      
    4. In the unicorex/conf/wsrflite.xml file, you need to enable a handler on the web services engine. Edit the TargetSystemService section to add the line " <handler type="in" class="de.fzj.unicore.uas.security.ProxyCertInHandler"/>" as shown below:
        <service name="TargetSystemService" wsrf="true" persistent="true">
           <interface class="de.fzj.unicore.uas.TargetSystem"/>
           <implementation class="de.fzj.unicore.uas.impl.tss.TargetSystemHomeImpl"/>
           <!-- additional proxy extraction handler definition -->
           <handler type="in" class="de.fzj.unicore.uas.security.ProxyCertInHandler"/>
         </service>
      
    5. Also define the same for other services such as the BES in the same unicorex/conf/wsrflite.xml file. Add the " <handler type="in" class="de.fzj.unicore.uas.security.ProxyCertInHandler"/> " as shown below:
        <service name="BESFactory" wsrf="true" persistent="true">
             <interface class="de.fzj.unicore.bes.BESFactory"/>
             <implementation class="de.fzj.unicore.bes.impl.factory.BESFactoryHomeImpl"/>
             <handler type="in" class="de.fzj.unicore.uas.security.ProxyCertInHandler"/>
        </service>
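
    With GridFTP configured, jobs can stage files in via gsiftp URLs. A sketch of a staging fragment from a UCC job description (the server URL and paths are hypothetical, and the JSON-style UCC job format is assumed):

        "Imports": [
          { "From": "gsiftp://gridftp.example.org/home/jdoe/input.dat", "To": "input.dat" }
        ]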
      

    MPI Execution Environment Configuration

    UNICORE does not execute MPI jobs natively. You will need to configure an execution environment (see http://unicore-dev.zam.kfa-juelich.de/documentation/unicorex-6.4.0/unicorex-manual.html#_execution_environments) to provide an MPI execution environment for users. In the execution environment you specify the command that places the job on a compute node (e.g. aprun, mpiexec or mpirun) and the options available with that command. Execution environments are defined in the unicorex/conf/simpleidb configuration file. For example, Darter's execution environment is defined as follows:
    <ee:ExecutionEnvironment xmlns:ee="http://www.unicore.eu/unicore/jsdl-extensions">
        <ee:Name>MPI</ee:Name>
        <ee:Version>1.0</ee:Version>
        <ee:Description>Execution Environment for MPI Applications</ee:Description>
        <ee:ExecutableName>/sw/xc30/altd/1.0/sles11.2/bin/aprun</ee:ExecutableName>
        <ee:CommandlineTemplate>#EXECUTABLE #ARGS #USERCOMMAND #USERARGS</ee:CommandlineTemplate>
        <ee:Argument>
            <ee:Name>NumberOfProcesses</ee:Name>
            <ee:IncarnatedValue>-n </ee:IncarnatedValue>
            <ee:ArgumentMetadata>
                <ee:Type>int</ee:Type>
                <ee:Description>The number of processes to start</ee:Description>
            </ee:ArgumentMetadata>
        </ee:Argument>
        <ee:Argument>
            <ee:Name>ProcessesPerHost</ee:Name>
            <ee:IncarnatedValue>-N</ee:IncarnatedValue>
            <ee:ArgumentMetadata>
                <ee:Type>int</ee:Type>
                <ee:Description>The number of processes per host</ee:Description>
            </ee:ArgumentMetadata>
        </ee:Argument>
        <ee:Argument>
            <ee:Name>ProcessesPerSocket</ee:Name>
            <ee:IncarnatedValue>-S </ee:IncarnatedValue>
            <ee:ArgumentMetadata>
                <ee:Type>int</ee:Type>
                <ee:Description>Number of MPI processes per socket</ee:Description>
            </ee:ArgumentMetadata>
        </ee:Argument>
        <ee:Argument>
            <ee:Name>CoresPerMPIProcess</ee:Name>
            <ee:IncarnatedValue>-d </ee:IncarnatedValue>
            <ee:ArgumentMetadata>
                <ee:Type>int</ee:Type>
                <ee:Description>Number of cores per MPI process (use with OpenMP)</ee:Description>
            </ee:ArgumentMetadata>
        </ee:Argument>
        <ee:PreCommand>
            <ee:Name>compile</ee:Name>
            <ee:IncarnatedValue>if [ "$C_SOURCES" != "" ]; then cc "$C_SOURCES"; sleep 30;fi</ee:IncarnatedValue>
            <ee:EnabledByDefault>true</ee:EnabledByDefault>
        </ee:PreCommand>
        <ee:PreCommand>
            <ee:Name>cdpbsworkdir</ee:Name>
            <ee:IncarnatedValue>cd $PBS_O_WORKDIR</ee:IncarnatedValue>
            <ee:EnabledByDefault>true</ee:EnabledByDefault>
        </ee:PreCommand>
    </ee:ExecutionEnvironment>
    
    
    Insert a suitably modified version of this definition just before the TargetSystemProperties section.

    Unicore Logging

    There are one or more log files for each Unicore service, usually located in {servicename}/logs.

    Logging configuration can be changed by modifying the logging.properties file for each service. See the contents of the file and the respective service manual for the logging properties documentation:
    [unicore@narwhal unicore-6.5.1]$ ls -l */conf/*logging.properties
    -rw-r--r-- 1 unicore unicore 1173 Jun  3 08:47 gateway/conf/logging.properties
    -rw-r--r-- 1 unicore unicore 1186 Jun  3 08:47 registry/conf/logging.properties
    -rw-r--r-- 1 unicore unicore 1270 Jun  3 08:47 unicorex/conf/logging.properties
    -rw-r--r-- 1 unicore unicore  774 Jun  3 08:47 unicorex/conf/ucc.logging.properties
    -rw-r--r-- 1 unicore unicore  333 Jun  3 08:47 xuudb/conf/client_logging.properties
    -rw-r--r-- 1 unicore unicore  552 Jun  3 08:47 xuudb/conf/logging.properties
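
    The logging.properties files use log4j syntax. As an illustrative example (the "unicore" logger category is assumed from the default configuration files), verbosity could be raised for debugging with a line such as:

      log4j.logger.unicore=DEBUG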
    

    Deployment Testing

    The following describes tests that can be run after successful installation to check that the Unicore installation is complete and that a minimum set of functionality is available.

    Need to point people to the Testing Guide

    Registering in Information Services

    Your new UNICORE services should be registered in your Execution Management Service Capability Kit, as documented here.