Internal Maintenance Version
Advisor: Prof. Prasant Mohapatra
For updated info or reporting issues, please go to https://github.com/dtczhl/PM-Server
Access Docker Environment
IP address: http://220.127.116.11:8000
Username: FirstnameLastname (Change your password ASAP)
- Please remove unused docker images and containers to save space.
- Please do NOT use the server to store your data. Use the data station for storage instead.
- Huanle Zhang: 50000 - 50999
- Abhishek Roy: 51000 - 51999
- Anshuman Chhabra: 52000 - 52999
- Debraj Basu: 53000 - 53999
- Hao Fu: 54000 - 54999
- Tianbo Gu: 55000 - 55999
- Zheng Fang: 56000 - 56999
- Muchen Wu: 57000 - 57999
- Jiuming Chen: 58000 - 58999
This tutorial shows how to run TensorFlow with GPU support.
Type in a Name and an Image (e.g., tensorflow/tensorflow:1.12.0-gpu-py3), enable
Publish all exposed ports, and add port mappings (22 for SSH, 8888 for Jupyter, etc.). For easy management, use the port range allocated to you above. Below is an example
Advanced container settings:
Command & logging: set Console to Interactive & TTY (-i -t)
Runtime & Resources: choose the nvidia runtime so the container can access the GPUs
Deploy the container. Your container is running now.
To enable SSH login to your container, click Console for your container and connect. Then run the following commands to install SSH:
apt-get update
apt-get install ssh
service ssh restart
You need to create a non-root user for SSH login (e.g., with adduser USERNAME), since SSH password login as root is disabled by default.
Connect to your container (you can regard it as your own standalone computer)
ssh USERNAME@IP -p Port
TensorFlow GPU Compatibility
If you encounter unexpected errors such as "not enough shared memory", contact the administrator. Once you get a normal user account on the workstation (not a Portainer account) from the administrator, you can SSH into the workstation and use docker command-line options (e.g.,
--shm-size). The following command runs a TensorFlow code example; you can use it to check whether your environment is working.
docker run --runtime=nvidia -it --shm-size=256m --rm tensorflow/tensorflow:1.12.0-gpu-py3 python -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))"
Just replace the content after -c in the above command with your code.
The nvidia-smi command does not work properly in the Docker environment.
Run Basic Programs (Not Recommended)
Deprecated! SFTP is disabled and Volumes are deleted
This tutorial will show you how to use the server.
app.py is the program as shown below. It prints out some messages and writes a string to a file named /data/tutorial.txt.
requirements.txt specifies the external packages your program depends on. Since we only use the built-in
os package in this tutorial, this file is empty.
Dockerfile specifies how to run your program.
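The Dockerfile contents are likewise not reproduced here; a minimal sketch consistent with the steps above, assuming a stock Python 3 base image (the base image tag is an assumption):

```dockerfile
# Base image is an assumption; any image with Python 3 works
FROM python:3
WORKDIR /app
COPY . /app
# requirements.txt is empty in this tutorial, so this installs nothing
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
```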
Build the docker image. In this tutorial's folder, run
docker build --tag=my_tutorial .
--tag is your docker image name
After executing the previous command, the docker image is installed on your host machine. To list your docker images, run
docker image ls
You can see the my_tutorial docker image in the output.
Save the my_tutorial docker image as an archive file so that you can upload it to the server. Run
docker save -o my_tutorial.tar my_tutorial
-o specifies the output path/filename.
my_tutorial is the image name.
Upload to the server. Log in to the server; under
Add container, fill in the Name and Image, scroll down to the Volumes tab, click map additional volume, and type in
/data (because we save a file to
/data/tutorial.txt). Select your volume named FirstnameLastname (everyone is pre-allocated one).
Deploy the container. Your docker image runs automatically; the container stops after the program finishes.
Under Quick actions, you can see the program's output. To retrieve
/data/tutorial.txt, log in using any SFTP client you like (I'm using FileZilla on Ubuntu): type in the IP address (same as the server, but without the port number), your username (FirstnameLastname), your password, and port 22. You can find the
tutorial.txt file under your volume.