[GE users] qrsh via sshd : Did not receive identification string from x.x.x.x

reuti reuti at staff.uni-marburg.de
Mon Sep 7 19:56:16 BST 2009


On 07.09.2009 at 20:05, l_heck wrote:

> I made some progress, but it is a bit of a mess:
>
> Firstly I had changed the host configuration
>
> qconf -mconf titania
>
> mailer                       /bin/mailx
> xterm                        /usr/bin/xterm
> qlogin_daemon                /usr/sbin/sshd  -i

Did you set up qlogin to use the helper wrapper from http://gridengine.sunsource.net/howto/qrsh_qlogin_ssh.html ?

*_daemon will be used at the target machine
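For reference, the per-host entries from that HOWTO look roughly like this (a sketch; the qlogin_wrapper path is an assumed install location and depends on where you put the helper script):

```shell
# Sketch of the interactive-job entries as set with "qconf -mconf <host>",
# following the qrsh_qlogin_ssh HOWTO.  The qlogin_wrapper path is
# hypothetical; the *_daemon entries run on the target (execution) host,
# the *_command entries on the issuing host.
qlogin_command               /usr/local/sge/bin/qlogin_wrapper
qlogin_daemon                /usr/sbin/sshd -i
rlogin_command               /usr/bin/ssh
rlogin_daemon                /usr/sbin/sshd -i
rsh_command                  /usr/bin/ssh
rsh_daemon                   /usr/sbin/sshd -i
```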

> rsh_command                  /usr/bin/ssh
> rsh_daemon                   /usr/sbin/sshd  -i
> rlogin_command               /usr/bin/ssh
> rlogin_daemon                /usr/sbin/sshd -i
>
> that produced
> Sep  7 17:03:34 titania sshd[21015]: pam_unix(sshd:session): session opened for user dph0elh by (uid=0)
> Sep  7 17:03:34 titania sshd[21015]: pam_selinux(sshd:session): conversation failed
> Sep  7 17:03:34 titania sshd[21015]: pam_selinux(sshd:session): No response to query: Would you like to enter a security context? [N]
> Sep  7 17:03:34 titania sshd[21015]: pam_selinux(sshd:session): Unable to get valid context for dph0elh
>
> Then I spoke to a colleague who has set this up successfully and he  
> told me that
> I had to modify the global config
>
> m71-root (1075)>qconf -sconf
> global:
> execd_spool_dir              /var/sge/spool
> mailer                       /bin/mailx
> xterm                        /usr/bin/xterm
> load_sensor                  none
> prolog                       none
> epilog                       none
> shell_start_mode             posix_compliant
> login_shells                 sh,ksh,csh,tcsh
> min_uid                      0
> min_gid                      0
> user_lists                   none
> xuser_lists                  none
> projects                     none
> xprojects                    none
> enforce_project              false
> enforce_user                 auto
> load_report_time             00:00:40
> max_unheard                  00:05:00
> reschedule_unknown           00:00:00
> loglevel                     log_warning
> administrator_mail           none
> set_token_cmd                none
> pag_cmd                      none
> token_extend_time            none
> shepherd_cmd                 none
> qmaster_params               none
> execd_params                 USE_QSUB_GID=true
> reporting_params             accounting=true reporting=false \
>                              flush_time=00:00:15 joblog=false sharelog=00:00:00
> finished_jobs                100
> gid_range                    50000-52000
> qlogin_command               telnet
> qlogin_daemon                /usr/sbin/in.telnetd
> rlogin_daemon                /usr/sbin/sshd -i
> rlogin_command               /usr/bin/ssh -X

*_command will be used on the issuing machine
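To see which settings actually take effect on each side, one can compare the global and the host-local configuration (a sketch; "titania" is the example host from above, and the guard makes it a no-op on machines without Grid Engine):

```shell
# Host-local settings (qconf -sconf <host>) override the global ones,
# so check both on the submit host and on the execution host.
if command -v qconf >/dev/null 2>&1; then
    echo "global:";  qconf -sconf         | egrep '_(command|daemon)'
    echo "titania:"; qconf -sconf titania | egrep '_(command|daemon)'
fi
```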

> rsh_daemon                   /usr/sbin/sshd -i
> rsh_command                  /usr/bin/ssh
> max_aj_instances             2000
> max_aj_tasks                 75000
> max_u_jobs                   0
> max_jobs                     0
> auto_user_oticket            0
> auto_user_fshare             0
> auto_user_default_project    none
> auto_user_delete_time        86400
> delegated_file_staging       false
> reprioritize                 0
>
> to include ssh instructions, and I had to make sure that the default
> queue was set correctly.
>
> So I did this and then I got a different error
> Sep  7 18:08:04 titania sshd[21699]: error: PAM: pam_open_session(): Authentication failure
> Sep  7 18:08:04 titania sshd[21699]: error: ssh_selinux_setup_pty: security_compute_relabel: Invalid argument
>
> and could still not log in - the response also depended on where I
> submitted the request from.

As there is nothing like a cluster-host configuration (where you could
set this up once per type of host), you have to do it for each host
individually.
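Checking every execution host can at least be scripted; a minimal sketch (`qconf -sel` lists the execution hosts, and the guard keeps it a no-op without Grid Engine):

```shell
# Walk over all execution hosts and show their ssh-related settings;
# hosts without a host-local configuration fall back to the global one.
# Settings are changed per host with "qconf -mconf <host>".
if command -v qconf >/dev/null 2>&1; then
    for h in $(qconf -sel); do
        echo "== $h =="
        qconf -sconf "$h" 2>/dev/null | egrep '_(command|daemon)' \
            || echo "   (no host-local config; global settings apply)"
    done
fi
```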

Also the ssh implementations must match, i.e. all hosts must run the
same type; otherwise you can only connect between machines of the same
type as the one where you issue the qrsh command. I saw problems where
one machine was running Tru64 while the others were Linux, and AFAIR
Tru64 does not use OpenSSH but another kind of ssh implementation. I
don't know what Solaris is using.
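A quick way to verify this is to ask every host for its ssh version banner (a sketch; node1/node2 are placeholder host names, and `ssh -V` prints its banner on stderr):

```shell
# Print the ssh version of every host; all of them should report the
# same implementation (e.g. all OpenSSH).  BatchMode and a timeout keep
# the loop from hanging on unreachable or password-prompting hosts.
for h in node1 node2; do        # hypothetical host names
    printf '%s: ' "$h"
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" 'ssh -V' 2>&1 \
        || echo "unreachable"
done
```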

Up to now I have never used SELinux, but several issues on this list
were solved by disabling it :-/
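If SELinux turns out to be the culprit behind the pam_selinux errors, switching it to permissive mode is a quick test (a sketch; needs root on the execution host, and `setenforce` only lasts until the next reboot):

```shell
# Check the current SELinux mode and, for testing only, relax it.
if command -v getenforce >/dev/null 2>&1; then
    getenforce              # prints Enforcing / Permissive / Disabled
    # setenforce 0          # uncomment (as root) for permissive mode
fi
# A permanent change is SELINUX=permissive (or disabled) in
# /etc/selinux/config, followed by a reboot.
```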

-- Reuti

> Then I found that sshd does not live in the same place on Solaris x64
> (most of my nodes) as on Linux - the node which I wanted to make
> interactive.
>
> Currently the muddle is complete and I have to start from scratch.
>
> Lydia
>
>
>
>
>
> On Mon, 7 Sep 2009, reuti wrote:
>
>> On 07.09.2009 at 18:21, l_heck wrote:
>>
>>> I followed the instructions on how to use qrsh over sshd and it
>>> fails with
>>> the login machine reporting
>>
>> Also ssh is using a random port in this setup. Do you have any
>> firewall in place?
>>
>> There was also a discussion some time ago about how to set up
>> hostbased authentication w/o the necessity for the users to set up an
>> ssh key w/o passphrase. I can forward the stuff in a PM if you like.
>>
>> -- Reuti
>>
>>
>>>
>>> Did not receive identification string from x.x.x.x (where x.x.x.x is
>>> the IP address of the ssh-ing system)
>>>
>>> If I try to ssh straight from the ssh-ing system into the interactive
>>> qrsh host, I get in without a password (I set up authorized_keys).
>>>
>>> Any idea?
>>>
>>> Lydia
>>>
>>>
>>> ------------------------------------------
>>> Dr E L  Heck
>>>
>>> University of Durham
>>> Institute for Computational Cosmology
>>> Ogden Centre
>>> Department of Physics
>>> South Road
>>>
>>> DURHAM, DH1 3LE
>>> United Kingdom
>>>
>>> e-mail: lydia.heck at durham.ac.uk
>>>
>>> Tel.: + 44 191 - 334 3628
>>> Fax.: + 44 191 - 334 3645
>>> ___________________________________________
>>>
>>> ------------------------------------------------------
>>> http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=216290
>>>
>>> To unsubscribe from this discussion, e-mail: [users-unsubscribe at gridengine.sunsource.net].
>>

------------------------------------------------------
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=216314

To unsubscribe from this discussion, e-mail: [users-unsubscribe at gridengine.sunsource.net].


