1. Follow the standard setup procedure to complete the Condor installation.
2. credd (reference: Condor manual, section 6.2, Microsoft Windows)
2.1 Central Manager (credd host)
The condor_config.local file is empty by default, so you need to add the following information.
Copy etc/condor_config.local.credd to condor_config.local and adjust the key settings as shown below. Don't forget to set START = TRUE, or you will run into errors after submitting jobs.
CONDOR_HOST = 192.168.1.101
## When is this machine willing to start a job?
START = TRUE
## When to suspend a job?
SUSPEND = FALSE
## When to nicely stop a job?
## (as opposed to killing it instantaneously)
PREEMPT = FALSE
## When to instantaneously kill a preempting job
## (e.g. if a job is in the pre-empting stage for too long)
KILL = FALSE
######################################################################
##
## condor_config.credd
##
## This is the default local configuration file for the machine
## running the condor_credd. You should copy this file to the
## appropriate location and customize it for your needs.
##
######################################################################
## Note: The following settings will need to be present in your
## global config file:
CREDD_HOST = $(CONDOR_HOST)
STARTER_ALLOW_RUNAS_OWNER = True
CREDD_CACHE_LOCALLY = True
## You'll also need to ensure that clients are configured to use
## PASSWORD authentication on any machine that can run jobs as the
## submitting user. For example,
SEC_CLIENT_AUTHENTICATION_METHODS = NTSSPI, PASSWORD
## And finally, you'll need to enable CONFIG-level access for all
## machines in the pool so that the pool password can be stored:
ALLOW_CONFIG = Administrator@*
SEC_CONFIG_NEGOTIATION = REQUIRED
SEC_CONFIG_AUTHENTICATION = REQUIRED
SEC_CONFIG_ENCRYPTION = REQUIRED
SEC_CONFIG_INTEGRITY = REQUIRED
## See the "Executing Jobs as the Submitting User" section of the
## Condor manual for further details.
## CREDD_SETTINGS
## CREDD logging settings
## Customize these if you wish.
CREDD_LOG = $(LOG)/CreddLog
CREDD_DEBUG = D_COMMAND
MAX_CREDD_LOG = 50000000
#################################################
## CREDD Expert settings
## Everything below is for the UBER-KNOWLEDGEABLE only!
## Do not change these unless you know what you are doing!
#################################################
DAEMON_LIST = $(DAEMON_LIST), CREDD, STARTD
#DC_DAEMON_LIST = \
#MASTER, STARTD, SCHEDD, KBDD, COLLECTOR, NEGOTIATOR, EVENTD, \
#VIEW_SERVER, CONDOR_VIEW, VIEW_COLLECTOR, HAWKEYE, CREDD, HAD, \
#QUILL
CREDD = $(SBIN)/condor_credd.exe
# Timeout session quickly since we normally only get contacted
# once per starter
SEC_CREDD_SESSION_TIMEOUT = 10
# Set security settings so that full security to the credd is required
CREDD.SEC_DEFAULT_AUTHENTICATION = REQUIRED
CREDD.SEC_DEFAULT_ENCRYPTION = REQUIRED
CREDD.SEC_DEFAULT_INTEGRITY = REQUIRED
CREDD.SEC_DEFAULT_NEGOTIATION = REQUIRED
# Require PASSWORD auth for password fetching
CREDD.SEC_DAEMON_AUTHENTICATION_METHODS = PASSWORD
# Only honor password fetch requests to the trusted "condor_pool" user
CREDD.ALLOW_DAEMON = condor_pool@$(UID_DOMAIN)
# Require NTSSPI for storing credentials
CREDD.SEC_DEFAULT_AUTHENTICATION_METHODS = NTSSPI
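After saving condor_config.local on the credd host, restart Condor so the updated DAEMON_LIST (which now includes CREDD and STARTD) takes effect. A minimal sketch, assuming the Condor bin directory is on the PATH:
condor_restart
After the restart, condor_credd.exe should be running alongside the other Condor daemons (you can check in the Windows Task Manager).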
2.2 Worker nodes
The following is the condor_config.local for each worker node.
CREDD_HOST = $(CONDOR_HOST)
STARTER_ALLOW_RUNAS_OWNER = True
CREDD_CACHE_LOCALLY = True
## You'll also need to ensure that clients are configured to use
## PASSWORD authentication on any machine that can run jobs as the
## submitting user. For example,
SEC_CLIENT_AUTHENTICATION_METHODS = NTSSPI, PASSWORD
## And finally, you'll need to enable CONFIG-level access for all
## machines in the pool so that the pool password can be stored:
ALLOW_CONFIG = Administrator@*
SEC_CONFIG_NEGOTIATION = REQUIRED
SEC_CONFIG_AUTHENTICATION = REQUIRED
SEC_CONFIG_ENCRYPTION = REQUIRED
SEC_CONFIG_INTEGRITY = REQUIRED
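A quick way to check that a worker node has actually picked up these settings is to query them with condor_config_val; a sketch, run from a command prompt on the worker node:
condor_config_val CREDD_HOST
condor_config_val SEC_CLIENT_AUTHENTICATION_METHODS
condor_config_val STARTER_ALLOW_RUNAS_OWNER
Each command should print the value configured above. If it does not, the condor_config.local you edited is probably not the one Condor is reading (check the CONDOR_CONFIG environment variable and LOCAL_CONFIG_FILE).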
2.3 condor_store_cred (storing the pool password)
The configuration employed here relies on the PASSWORD authentication method to facilitate secure communication between execute machines and the condor_credd. In order for PASSWORD authenticated communication to work, a "pool password" must be chosen and distributed. Once a pool password is decided upon, it must be stored identically on each machine. The pool password should be stored first on the condor_credd host, then on the other machines in the pool.
To store the pool password on a given machine, run condor_store_cred -c add when logged in with the administrative account on that machine, and enter the password when prompted. If the administrative account is shared across all machines (i.e. if it is a domain account or has the same password on all machines), logging in separately to each machine in the pool can be avoided. Instead, the pool password can be securely pushed out to each machine using commands of the form condor_store_cred -c -n exec01.cs.wisc.edu add.
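For example, the commands look roughly like this (exec01.cs.wisc.edu is a placeholder hostname; run them while logged in with the administrative account):
condor_store_cred -c add
condor_store_cred -c -n exec01.cs.wisc.edu add
The first command stores the pool password on the local machine (the credd host); the second pushes the same pool password to a remote execute machine. The -c flag means the pool password is being stored rather than an individual user's password.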
I failed to execute condor_store_cred -c -n exec01.cs.wisc.edu add on the credd host, so instead I executed condor_store_cred -c add on each worker node. That works. Others have met the same problem, and the following is the answer:
You'll need to start a schedd daemon on your execute node, run the credential store command, and then shut down the schedd. It's a necessity of storing credentials; the schedd only has to be running while you're storing credentials.
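Since the worker-node configuration above does not necessarily run a schedd, one way to follow this advice is the sketch below (the exact steps are an assumption based on the answer above; adapt them to your setup). Temporarily add this line to the worker node's condor_config.local:
DAEMON_LIST = $(DAEMON_LIST), SCHEDD
Run condor_restart, then store the pool password locally with:
condor_store_cred -c add
Finally, remove the SCHEDD entry again and run condor_restart once more.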
I am a bit worried that I entered different passwords on the central manager and the worker nodes, but it still works.
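Once the credd and the pool password are in place, the point of the whole setup is to run jobs under the submitting user's account. As a final check, here is a hedged sketch of the submit-side steps (the executable and file names are placeholders). Each submitting user first stores their own Windows password, this time without the -c flag:
condor_store_cred add
Then the submit description file can request run-as-owner execution:
universe     = vanilla
executable   = test.bat
output       = test.out
error        = test.err
log          = test.log
run_as_owner = true
queue
If everything is configured correctly, the job runs on the execute machine under the submitting user's account instead of the temporary slot account Condor normally creates.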