h1. Introduction to Condor for users
Condor is a job submission system that creates a high throughput computing environment.
h2. Getting started
After gaining ssh access to a Condor frontend, a user can see what resources are available by running
<code>
condor_status
</code>
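condor_status also accepts options to filter or summarize its output. For example, the following show only the pool totals and only the machines currently available to run jobs, respectively:
<code>
condor_status -total
condor_status -avail
</code>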
and check the status of the queue by running
<code>
condor_q
</code>
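By default condor_q lists every job in the queue. To restrict the listing to one's own jobs, the login name can be given as an argument (here username stands for the actual login name):
<code>
condor_q username
</code>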
In order to submit jobs, one needs to create a submit description file that defines the job's requirements and a few options. A simple example to get started would be
h3. Example job submission script helloworld.submit
<code>
executable = helloworld.sh
universe = vanilla
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
stream_output = true
transfer_input_files = helloworld.dat
request_cpus = 1
request_memory = 8000
requirements = (target.Arch == "X86_64")
input = /dev/null
output = out
error = error
log = log
queue
</code>
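The final queue statement accepts a count, and the $(Process) macro expands to each instance's number, so several identical jobs can be submitted at once without their output files overwriting each other. For example, changing the output, error and queue lines above as follows would queue 10 instances of the job in the same cluster:
<code>
output = out.$(Process)
error = error.$(Process)
queue 10
</code>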
h3. Example additional file helloworld.dat
<code>
Hello World!
</code>
h3. Example executable script helloworld.sh
<code>
#!/bin/bash
echo "----------------------"
hostname
echo "----------------------"
date
echo "----------------------"
echo "Sleeping 20s"
sleep 20
echo "----------------------"
cat helloworld.dat
</code>
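Since the script will be executed on a remote node, it is a good idea to make it executable and to test it locally, in the directory containing helloworld.dat, before submitting:
<code>
chmod +x helloworld.sh
./helloworld.sh
</code>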
After creating these three files, the job can be submitted by running
<code>
condor_submit helloworld.submit
</code>
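The job's progress is recorded in the log file named in the submit file. Assuming the log file is called log, as in the example above, condor_wait can be used to block until the job has finished:
<code>
condor_wait log
</code>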
h2. Checking status
To see details about the status of each job in the queue, one would run
<code>
condor_q -b
</code>
While a job is running, it is also possible to access the node it is running on, by identifying the job ID with condor_q and then running
<code>
condor_ssh_to_job jobid
</code>
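The jobid is the cluster.process identifier shown in the first column of condor_q. For example, assuming the job was assigned cluster 42:
<code>
condor_ssh_to_job 42.0    # 42.0 is only an illustrative job id
</code>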
h2. More Information
http://research.cs.wisc.edu/condor/manual/v7.4/ref.html