[GE users] SSH and host keys

prentice prentice at ias.edu
Fri Feb 13 16:23:05 GMT 2009

That's exactly what I did. I wrote a shell script that updates
/etc/ssh/ssh_known_hosts, so after re-installing a node (or nodes), I
just run that command and all key issues are solved. Since I never ssh
from node to node, I don't bother updating /etc/ssh/ssh_known_hosts on
the nodes, although that can easily be done with tools like tentakel.
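If you do want the file on the nodes and don't have tentakel handy, a plain loop over scp works too. This is only a sketch: the node names are made up, and the echo makes it a dry run so nothing is copied until you remove it.

```shell
#!/bin/sh
# Push the master's ssh_known_hosts out to each node.
# NODES is an illustrative list; the leading "echo" makes this a
# dry run that just prints the scp commands it would execute.
NODES="node01 node02 node03"
for node in $NODES; do
    echo scp /etc/ssh/ssh_known_hosts "root@${node}:/etc/ssh/ssh_known_hosts"
done
```

Drop the echo (and adjust the user) once the printed commands look right.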

I find that for systems like cluster nodes, it's easier to just update
that file than it is to worry about preserving ssh keys through node
re-installs.

Let me save you some time. Here's the script I use. I have all the
cluster nodes in /etc/hosts, and I only care about IP addresses that
begin with 172 or 192 (the master node is on multiple nets, of course).
The second ssh-keyscan command deals with a weird bug in how the host
"master" is handled (probably because it's on two nets).

$ more /opt/sbin/update_ssh_keys
#!/bin/sh

tmpfile=/tmp/ssh_hosts

if [ -f ${tmpfile} ]; then
    cat <<EOF

ERROR: ${tmpfile} already exists!
Someone else may already be running this command,
or a previous attempt died. Please make sure no
one else is running this command, remove ${tmpfile},
and try again.

EOF
    exit 1
fi

# Grab the address and first two hostnames of every 172.*/192.* entry
egrep "^(172|192)" /etc/hosts | tr -s " " | awk '{ print $1"\n"$2"\n"$3 }' > ${tmpfile}

ssh-keyscan -t rsa,dsa -f ${tmpfile} | sort -n > /etc/ssh/ssh_known_hosts
# Scan "master" by name as well; its keys can otherwise be missed
# (probably because it's on two nets)
ssh-keyscan -t rsa,dsa master | sort -n >> /etc/ssh/ssh_known_hosts

rm ${tmpfile}
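If you want to sanity-check that egrep/awk pipeline before letting it near /etc/ssh/ssh_known_hosts, you can run it against a throwaway hosts file. Everything below (the file name, addresses, and hostnames) is made up for illustration:

```shell
#!/bin/sh
# Build a sample /etc/hosts-style file (contents invented for the test)
cat > /tmp/sample_hosts <<'EOF'
127.0.0.1   localhost
172.16.0.1  master master.cluster m0
192.168.0.2 node01 node01.cluster n1
10.0.0.5    storage storage.cluster s0
EOF

# Same idea as the script: keep 172.*/192.* lines, squeeze whitespace,
# print the address plus the first two names, one per line
egrep "^(172|192)" /tmp/sample_hosts | tr -s " " | awk '{ print $1"\n"$2"\n"$3 }'

rm /tmp/sample_hosts
```

You should see six lines: the address and first two names for the 172 entry, then the same for the 192 entry; the localhost and 10.* lines are dropped.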

mhanby wrote:
> Thanks, I'd not used the ssh-keyscan utility, I may have to roll up an
> automated task to do this in case a node gets reinstalled or a new node
> is added.

> prentice wrote:
>> opoplawski wrote:
>>> Anyone know how to avoid the following:
>>> $ qrsh -l idllic=1
>>> The authenticity of host '[apollo.cora.nwra.com]:42115 
>>> ([]:42115)' can't be established.
>>> RSA key fingerprint is be:14:c6:b3:7e:23:48:57:71:c2:02:75:74:7e:f4:ec.
>>> Are you sure you want to continue connecting (yes/no)?
>>> every time qrsh is run and the massive population of host keys it causes?
>> Use ssh-keyscan to pre-populate the key database on each system:
>> ssh-keyscan -t rsa,dsa -f /tmp/ssh_hosts | sort -n >
>> /etc/ssh/ssh_known_hosts
>> rm /tmp/ssh_hosts
>> There's a man page for ssh-keyscan that can give you the full details. I
>> wrote a script to automate this on my cluster so that when I re-install
>> a new node, I don't need to worry about preserving the node's ssh keys -
>> I just update /etc/ssh/ssh_known_hosts by running the script after the
>> install.
>> This should be obvious, but I'll mention it anyway: /tmp/ssh_hosts is a
>> file you create yourself in advance containing the names of the hosts
>> you want in your known_hosts file.
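To be concrete, that file is just one hostname per line. If your nodes are numbered, generating it is a one-liner; the node names here are hypothetical:

```shell
#!/bin/sh
# Write one hostname per line; "node01".."node04" are made-up names
for i in 01 02 03 04; do
    echo "node$i"
done > /tmp/ssh_hosts

cat /tmp/ssh_hosts
```

Swap in however your site names its nodes, then feed the file to ssh-keyscan -f as above.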



To unsubscribe from this discussion, e-mail: [users-unsubscribe at gridengine.sunsource.net].

More information about the gridengine-users mailing list