bwGRiD Cluster Ulm
The Baden-Württemberg Grid (bwGRiD), as a member of the D-Grid initiative, provides more than 11,000 cores for research and science throughout Baden-Württemberg. Contributing partners are the universities at Freiburg, Heidelberg, Tübingen, Mannheim and Esslingen, the joint site Ulm/Konstanz, as well as the Karlsruhe Institute of Technology and the High Performance Computing Center Stuttgart (HLRS).
You can find the corresponding bwGRiD-specific web pages of our partners here: Stuttgart, Tübingen, Heidelberg, Mannheim.
Hardware and Network Topology at Ulm
The resources of Ulm and Konstanz are jointly administered at Ulm. This allows us to offer 20 IBM BladeCenter Type H chassis with 14 IBM HS21XM blades each, i.e. 280 blades in total, to users in Ulm and Konstanz. Every blade contains two 4-core Intel Harpertown CPUs with 2.83 GHz and 16 GByte of RAM.
Maintenance and login traffic run over Gigabit Ethernet, where each blade center is connected to the local backbone by a single Gigabit Ethernet link only. Hence the 14 blades of a chassis have to share the same Gigabit Ethernet connection.
Each blade has an additional DDR 4x InfiniBand (IB) interface (transfer rate: 20 GBit/s). The IB links are connected to a common 288-port Grid Director ISR 2012 InfiniBand switch and thus enable rapid communication between individual blades. Both the data exchange of parallel programs via the Message Passing Interface (MPI) standard and the communication with the associated parallel Lustre file system from Hewlett Packard (aggregate transfer rate 1.8 GByte/s, overall capacity 64 TByte) are carried out over this IB interface.
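As a rough illustration of how a parallel program typically uses this fabric, the following shell sketch compiles and launches a small MPI job through the module system. The module name mpi/openmpi and the process count are assumptions for illustration only; the actual module names are listed after login via module avail.
  # load an MPI environment (module name "mpi/openmpi" is an assumed example)
  module load mpi/openmpi
  # compile a parallel program with the MPI compiler wrapper
  mpicc -O2 -o my_mpi_prog my_mpi_prog.c
  # launch it on 8 processes; MPI traffic between blades uses the IB interface
  mpirun -np 8 ./my_mpi_prog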
Since March 2009 every blade has been equipped with local hard disks. This allows applications to use a 16 GB swap area as well as a 120 GB /tmp area on each node. Applications from the chemistry domain in particular benefit significantly from these local hard disks, due to the improved latency on the one hand and the independence of local storage from the workload of the overall grid on the other. For an ideal application that can perfectly utilize the local hard disks, an overall system bandwidth of 20 GByte/s is theoretically achievable.
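To see what a node offers in practice, the following standard Linux commands (run on a compute node, for example inside an interactive job) report the CPU cores, memory, swap and the local /tmp area; the exact output format depends on the installed distribution.
  grep -c ^processor /proc/cpuinfo   # number of CPU cores (8 per blade)
  free -g                            # RAM and swap in GByte
  df -h /tmp                         # size and usage of the local /tmp area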
Login to bwGRiD
The clusters contributing to the bwGRiD are situated at seven different sites, and each site uses its own name/ID space. This means that a user who has an account and a corresponding username at Ulm University does not automatically have an account at the University of Stuttgart, and the username used at Ulm may already be assigned to somebody else in Stuttgart. But as the name bwGRiD implies, every system should be accessible to users from each of the universities in Baden-Württemberg. Therefore an approach had to be found that allows users to log into the grid computers at different sites with a unified authentication method. This functionality is provided by Globus, a so-called grid middleware employing DFN-Grid certificates. When somebody tries to log into a system, regardless of its location, their identity is verified using their DFN-Grid certificate and a local username is automatically assigned. The mapping between certificate and username is managed by the Globus system.
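A quick way to check which certificate identity Globus will use for this mapping is the certificate and proxy information commands of the Globus Toolkit; a minimal sketch, assuming the Globus tools are on your PATH (on the kiz machines they become available after module load system/globus):
  grid-cert-info -subject    # show the distinguished name of your DFN-Grid certificate
  grid-proxy-init            # create a proxy credential from the certificate
  grid-proxy-info -timeleft  # check how long the proxy credential remains valid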
This approach offers some advantages:
With the Globus system you can easily change and select your preferred site according to your software and hardware needs. If, for example, the queue at one site is congested and the queue at another site offers free capacity, you can simply switch between those two sites. Or you may need a software package that is only provided in Ulm: with your certificate you can seamlessly log into the system at Ulm and use the environment there.
Independent of Globus, some universities offer a simplified direct login for local users. This approach does not offer the advantage of site interchangeability, and hence we strongly recommend using the certificate-based method.
Ulm University also offers two methods to log into its part of the bwGRiD cluster:
After you have fulfilled all prerequisites for certificate-based authentication, you may log into the Globus frontend at Ulm (koios.rz.uni-ulm.de), Stuttgart (gt4bw.dgrid.hlrs.de), Mannheim (gtbw.grid.uni-mannheim.de) or Tübingen (gt4.uni-tuebingen.de) with the following Globus commands (available on the kiz Unix servers and the workstations in the Linux pools):
  module load system/globus
  grid-proxy-init -debug -verify
  gsissh -x -p 2222 koios.rz.uni-ulm.de
It is also possible to install the Globus Toolkit on your own computer (gsissh is a component of this toolkit) or to employ the Java-based GSISSH-Term. In both cases you first have to take the steps needed to enable a certificate-based login with your computer. Properly configuring your own Globus installation is not a straightforward task and we are not able to offer support in this context. If you run into trouble during the installation and configuration process, you should prefer our kiz computers, which offer a handy pre-configured installation of the Globus system.
If you prefer a direct login, you need a kiz Unix account. With this account you can log into the interactive node themis the same way you log into the normal kiz servers:
  ssh -l Account-Name themis.rz.uni-ulm.de
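Input and output data can be copied to and from the cluster with the matching copy commands. The following is a minimal sketch, assuming the GSI-OpenSSH copy tool gsiscp (shipped with the Globus Toolkit alongside gsissh) for the certificate-based route and plain scp for the direct route; Account-Name and the file names are placeholders.
  # certificate-based: copy a local input file to your home directory on the Ulm frontend
  gsiscp -P 2222 input.dat koios.rz.uni-ulm.de:
  # direct login: copy results back from themis using your kiz Unix account
  scp Account-Name@themis.rz.uni-ulm.de:results.dat .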
Further reading
When you log into one of the frontend nodes in Ulm, extensive documentation introducing the cluster, the queueing system, the directory layout, the data management and the software module system is displayed automatically. We recommend that you read this documentation thoroughly. You can also read this document later simply by typing "man bw-grid" (available on themis and koios only).
Details on the installed software, the queueing system and how to use the cluster are available after logging into our cluster; you will not find that information on the web.
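As a starting point once you are logged in, the software module system mentioned above is driven by the usual module commands; the package name chem/gaussian below is only an assumed example, the real names are shown by module avail.
  man bw-grid                  # the cluster documentation mentioned above
  module avail                 # list all software modules installed on the cluster
  module load chem/gaussian    # load an example package (name is an assumption)
  module list                  # show which modules are currently loaded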
Please visit the bwGRiD software page for a list of software packages installed at the individual bwGRiD locations. The commercial software packages are usually restricted to special user groups.
News concerning bwGRiD Cluster Ulm
Technical news is also displayed automatically when you log into themis or koios. This covers, for example, announcements of reboots, changes in the software setup or other system settings. Please read these login messages. There is also a mailing list, which can be found at https://imap.uni-ulm.de/lists/info/kiz.wiss-software.
Help Desk
Mon - Fri 8 a.m. - 6 p.m.
+49 (0)731/50-30000
helpdesk(at)uni-ulm.de
Help Desk support form
Contact bwGRiD Ulm
For questions concerning the bwGRiD please contact our Helpdesk or send an email to dgrid-admin(at)uni-ulm.de.
bwGRiD Ulm online status
The online status of bwGRiD Ulm is displayed here (see below) if login to themis.rz.uni-ulm.de and koios.rz.uni-ulm.de is NOT possible. If login is possible, the detailed status is printed during login to one of those nodes.
In addition, software and major administration news are published on our scientific software mailing list. For the mail history you can browse the archive of the mailing list.
Current status of bwGRiD Ulm (last update 2013-04-04):
Normal operation. Themis and koios should be reachable. Please log in to one of the nodes to read the latest news.


