WIEN2k is a commercial (license-fee) program for calculating the electronic structure of solids with density functional theory. It is based on the full-potential (linearized) augmented plane-wave ((L)APW) + local orbitals (lo) method, one of the most accurate schemes for band-structure calculations. Within density functional theory either the local (spin) density approximation (LDA) or the generalized gradient approximation (GGA) can be used. WIEN2k is an all-electron scheme and includes relativistic effects.
The WIEN2k 17.1 package supports MPI-parallel, OpenMP-parallel and serial execution, and it can be installed without root privileges, i.e. in a user's own directory. This article only describes the setup with the Intel build environment (compiler, MKL, MPI) together with FFTW3.
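Before configuring anything you can quickly check whether the Intel tool chain is already visible in the current shell; a minimal check (plain shell commands, nothing WIEN2k-specific):

  which ifort icc mpiifort    # Intel Fortran/C compilers and the MPI Fortran wrapper
  echo $MKLROOT               # should point to the MKL directory if MKL is already set up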
If /opt/intel/compilers_and_libraries_2017.4.196/linux/bin/intel64/ifort is reported as not existing, you can set up the Intel compiler environment along the following lines (the exact path depends on your system):
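A minimal sketch, assuming the Intel Parallel Studio 2017 tree shown above; adjust the path to your actual installation:

  source /opt/intel/compilers_and_libraries_2017.4.196/linux/bin/compilervars.sh intel64
  which ifort    # should now resolve to .../linux/bin/intel64/ifort

Putting the source line into ~/.bashrc makes the setting permanent for the user.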
If /opt/intel/compilers_and_libraries_2017.4.196/linux/mkl is reported as not existing, you can set up the Intel MKL environment along the following lines (preferably the same version as the compiler):
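A sketch for MKL, again assuming the 2017 tree; mklvars.sh sets MKLROOT and the library search paths:

  source /opt/intel/compilers_and_libraries_2017.4.196/linux/mkl/bin/mklvars.sh intel64
  echo $MKLROOT    # should print .../compilers_and_libraries_2017.4.196/linux/mkl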
If /opt/intel/compilers_and_libraries_2017.4.196/linux/mpi/intel64/bin/mpiifort is reported as not existing, you can set up the Intel MPI environment along the following lines:
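A sketch for Intel MPI, assuming the same 2017 layout:

  source /opt/intel/compilers_and_libraries_2017.4.196/linux/mpi/intel64/bin/mpivars.sh
  which mpiifort    # should now resolve to .../mpi/intel64/bin/mpiifort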
WIEN2k supports MPI-parallel FFTW. I am not sure whether the FFTW shipped with Intel MKL supports this, so here FFTW3 is compiled from source with MPI support enabled (see the build sketch below).
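A minimal build sketch, assuming the FFTW 3.3.6-pl2 source tarball and an install prefix of $HOME/local (the prefix used later in this article); the version and prefix are of course up to you:

  tar xzf fftw-3.3.6-pl2.tar.gz
  cd fftw-3.3.6-pl2
  ./configure --prefix=$HOME/local --enable-mpi CC=icc F77=ifort MPICC=mpiicc
  make -j 8
  make install

This installs libfftw3.a and libfftw3_mpi.a under $HOME/local/lib and the headers under $HOME/local/include, which is what the FFTW options and FFTW_LIBS settings shown further below refer to.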
Run ./siteconfig_lapw in the WIEN2k installation directory to start the site configuration; the main menu appears:

  *********************************************************
  *                      W I E N                          *
  *                site configuration                     *
  *********************************************************

    Last configuration: Sat 08 Jul 2017 14:01:41 CST
    Wien Version:       WIEN2k_17.1 (Release 30/6/2017)
    System:             linuxifc

    S   Specify a System
    C   Specify Compiler
    O   Compiling Options (Compiler/Linker, Libraries)
    P   Configure Parallel Execution
    D   Dimension Parameters
    R   Compile/Recompile
    U   Update a package
    L   Perl Path (if not in /usr/bin/perl)
    T   Temp Path

    Q   Quit

  Selection:

Configure each item by entering the corresponding single letter (case-insensitive):
  **************************************************************************
  *                          Specify a system                              *
  **************************************************************************

    Current system is: unknown

    LI  Linux (Intel ifort compiler (12.0 or later)+mkl+intelmpi))
    LS  Linux+SLURM-batchsystem (Intel ifort (12.0 or later)+mkl+intelmpi)
    LG  Linux (gfortran compiler + OpenBlas)
    M   show more, not updated older options (not recommended)

    Q   Quit

Choose LI, i.e. Linux (Intel ifort compiler (12.0 or later)+mkl+intelmpi) (linuxifc). The selection is saved in the WIEN2k_SYSTEM file; later you can either edit this file or rerun siteconfig_lapw to change it, and the same applies to the files set in the following steps.
  Recommended setting for f90 compiler: ifort
  Current selection: ifort

  Your compiler:

Press Enter (or type ifort and press Enter) to use the Intel Fortran compiler; the setting is saved in the COMPILER file.
  Recommended setting for C compiler: cc
  Current selection: icc

  Your compiler:

Press Enter (or type icc and press Enter) to use the Intel C compiler; the setting is saved in the COMPILERC file.
It will then report that the MKL environment has been found:
  ***********************************************************************
  *               Specify compiler and linker options                   *
  ***********************************************************************

  Since intel changes the name of the mkl-libraries from version to version,
  you may find the linking options for the most recent ifort version at
  http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/

  Recommended options for system linuxifc are:

       Compiler options:     -O1 -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include
       Linker Flags:         $(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -pthread
       Preprocessor flags:   '-DParallel'
       R_LIB (LAPACK+BLAS):  -lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -openmp -lpthread

  Current settings:

   O   Compiler options:     -O1 -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include
   L   Linker Flags:         $(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -pthread
   P   Preprocessor flags    '-DParallel'
   R   R_LIBS (LAPACK+BLAS): -lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -openmp -lpthread
   X   LIBX options:
       LIBXC-LIBS:

   PO  Parallel options

   S   Save and Quit
   Q   Quit and abandon changes

  To change an item select option.

  Selection:

After pressing Enter the following is shown (the Current settings look like this because I had already configured them before):
  Recommended options for system linuxifc are:

       Compiler options:     -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io
       Linker Flags:         $(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -pthread
       Preprocessor flags:   '-DParallel'
       R_LIB (LAPACK+BLAS):  -lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -openmp -lpthread

  Current settings:

   O   Compiler options:     -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -O2 -axavx -fp-model source -assume buffered_io
   F   FFTW options:         -DFFTW3 -I/$(HOME)/local/include
   L   Linker Flags:         $(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH)
   P   Preprocessor flags    '-DParallel'
   R   R_LIB (LAPACK+BLAS):  -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread
   FL  FFTW_LIBS:            -lfftw3_mpi -lfftw3 -L/$(HOME)/local/lib

   S   Save and Quit
   Q   Quit abandon changes

  To change an item select option.

  Selection:

Following the prompts, press O, F, L, R and FL to set the corresponding options, and make sure that after setting they read:
  Current settings:

   O   Compiler options:     -O1 -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include
   L   Linker Flags:         $(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -pthread
   P   Preprocessor flags    '-DParallel'
   R   R_LIBS (LAPACK+BLAS): -lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -openmp -lpthread
   X   LIBX options:
       LIBXC-LIBS:

Remember to press S to save and quit after setting all items. The settings are saved in the WIEN2k_OPTIONS file.
  ***********************************************************************
  *                  Configure parallel execution                       *
  ***********************************************************************

  These options are stored in parallel_options of your WIENROOT.
  You can change them later manually or with siteconfig.

  If you have only ONE multi-core node (ONE shared memory system) it is
  normally better to start jobs in the background rather than using remote
  commands. If you select a shared memory system WIEN will by default not
  use remote shell commands (USE_REMOTE and MPI_REMOTE = 0 in
  parallel_options) and set the default granularity to 1.

  You still can override this default granularity in your .machines file.

  You may also set a specific TASKSET command to bind your executables to
  a specific core on multicore machines.

  If you have A CLUSTER OF shared memory parallel computers answer next
  question with N

  Shared Memory Architecture? (y/N):
  Do you know/need a command to bind your jobs to specific nodes?
  (like taskset -c). Enter N / your_specific_command:

Type N and press Enter.
  On most mpi2-versions, it is better to start an mpijob on the original
  machine and not via ssh on a remote system. If you are using mpi2 set
  MPI_REMOTE to 0

  Set MPI_REMOTE to 0 / 1:

Type 0 and press Enter.
  ***********************************************************************
  *                  Configure parallel execution                       *
  ***********************************************************************

  Parallel execution makes use of remote shells.
  On most computers these are named "rsh or ssh".
  Please specify the name of the remote shell command:
  On linuxifc systems the remote shell is normally ssh, which will be used
  as default.

  Remote shell (default is ssh) =

Type ssh and press Enter; the following is shown:
  ***********************************************************************
  *                  Configure parallel execution                       *
  ***********************************************************************

  Parallel execution makes use of remote shells.
  On most computers these are named "rsh or ssh".
  Please specify the name of the remote shell command:
  On linuxifc systems the remote shell is normally ssh, which will be used
  as default.

  Remote shell (default is ssh) = ssh

  Changing lapw1para
  Changing lapwsopara
  Changing lapw2para
  Changing lapwdmpara
  Changing opticpara
  Changing irreppara
  Changing qtlpara
  Changing hfpara
  Changing dstartpara
  Changing vec2old
  Changing testpara
  Changing x_nmr
  Done.

  Press RETURN to continue
  **************************************************************************

  Do you have MPI, ScaLAPACK, ELPA, or FFTW installed and intend to run
  finegrained parallel?
  This is useful only for BIG cases (50 atoms and more / unit cell) and
  your HARDWARE has at least 16 cores (or is a cluster with Infiniband)
  You need to KNOW details about your installed MPI, ELPA, and FFTW ) (y/N)

Type y and press Enter to configure fine-grained MPI parallelism.
  Recommended setting for parallel f90 compiler (default): mpiifort
  Current selection: mpiifort

  Your compiler:

Type mpiifort and press Enter; the following is shown:
  Your parallel compiler will be: mpiifort
  sed: can't read /opt/src/WIEN2k_17.1/WIEN2k_COMPILER: No such file or directory

  Do you want to use a present ScaLAPACK installation? (Y,n):
  To abort the ScaLAPACK setup enter 'x' at any point!

  You seem to have an MKL installation.
  (MKLROOT: /opt/intel/compilers_and_libraries_2017.4.196/linux/mkl)

  Do you want to use the MKL version of ScaLAPACK? (Y,n):

Type y and press Enter.
  Your SCALAPACK_LIBS are: -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64

  These options derive from your chosen settings:
    SCALAPACKROOT:      /opt/intel/compilers_and_libraries_2017.4.196/linux/mkl/lib/
    SCALAPACK_LIBNAME:  mkl_scalapack_lp64
    BLACSROOT:          /opt/intel/compilers_and_libraries_2017.4.196/linux/mkl/lib/
    BLACS_LIBNAME:      mkl_blacs_intelmpi_lp64
    MKL_TARGET_ARCH:    intel64

  Is this correct? (Y,n):
  Do you want to use a present FFTW installation? (Y,n):

Type y and press Enter.
  To abort the FFTW setup enter 'x' at any point!

  Do you want to automatically search for FFTW installations? (Y,n):

Type y and press Enter.
  Is this correct? (Y,n):
  Do you want to use a present FFTW installation? (Y,n):
  To abort the FFTW setup enter 'x' at any point!
  Do you want to automatically search for FFTW installations? (Y,n):

Type n and press Enter.
  Your present FFTW choice is:
  Please specify whether you want to use FFTW3 (default) or FFTW2 (FFTW3 / FFTW2):

Type FFTW3 and press Enter.
  Present FFTW root directory is:
  Please specify the path of your FFTW installation (like /opt/fftw3/) or accept present choice (enter): /opt/fftw/3.3.6-p12/intel/2017.6.196

  The present target architecture of your FFTW library is: lib64
  Please specify the target achitecture of your FFTW library (e.g. lib64) or accept present choice (enter):

Press Enter.
  The present name of your FFTW library: fftw3
  Please specify the name of your FFTW library or accept present choice (enter):

Press Enter.
  Your FFTW_OPT are:   -DFFTW3 -I/opt/fftw/3.3.6-p12/intel/2017.4.196/include
  Your FFTW_LIBS are:  -L/opt/fftw/3.3.6-p12/intel/2017.4.196/lib64 -lfftw3
  Your FFTW_PLIBS are: -lfftw3_mpi

  These options derive from your chosen Settings:
    FFTWROOT:      /opt/fftw/3.3.6-p12/intel/2017.4.196/
    FFTW_VERSION:  FFTW3
    FFTW_LIB:      lib64
    FFTW_LIBNAME:  fftw3

  Is this correct? (Y,n):
  Do you want to use ELPA? (y,N):

Set this according to your own needs; ELPA is not used here, so type N and press Enter.
  ***********************************************************************
  *                  Configure parallel execution                       *
  ***********************************************************************

  Since intel changes the name of the mkl-libraries frequently you may
  find the linking options for the most recent ifort version at
  http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/

  You need to specify MPI and parallel libraries done in previous step,
  options for parallel compilation in FPOPT, and how to run mpi jobs in
  MPIRUN (during execution _NP_ will be substituted by the "number of
  processors" _EXEC_ by the "executable" and _HOSTS_ by the name of the
  machines file).

  For calculations on SLURM batch systems you have to additionally specify
  number of cores per node in CORES_PER_NODE, a command to bind tasks to
  cpus in PINNING_COMMAND (optional), and an ordered list of physical
  cores in PINNING_LIST (optional).

  Recommended options for system linuxifc are:
       FPOPT(par.comp.options) : -O1 -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include
       MPIRUN command          : mpirun -np _NP_ -machinefile _HOSTS_ _EXEC_

  Please specify your parallel compiler options or accept the
  recommendations (Enter - default)!:

Press Enter.
  ***********************************************************************
  *                  Summary of parallel settings                       *
  ***********************************************************************

  Current settings:
     Parallel compiler      : mpiifort
     SCALAPACK_LIBS         : -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64
     FFTW_OPT               : -DFFTW3 -I/opt/fftw/3.3.6-p12/intel/2017.4.196/include
     FFTW_LIBS              : -L/opt/fftw/3.3.6-p12/intel/2017.4.196/lib64 -lfftw3
     FFTW_PLIBS             : -lfftw3_mpi
     ELPA_OPT               :
     ELPA_LIBS              :
     FPOPT(par.comp.options): -O1 -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -I$(MKLROOT)/include
     MPIRUN command         : mpirun

  S   Accept, Save, and Quit
  R   Restart Configuration
  Q   Quit and abandon changes

  Please accept and save these settings, restart the configuration, or
  abandon your changes. If you want to change anything later on you can
  redo this whole configuration process or you can change single items in
  "Compiling Options".

  Selection:

Type s and press Enter. The settings are saved in the WIEN2k_MPI file.
The prompts below come from a configuration run that instead uses the FFTW3 compiled earlier in this article; the common questions are answered as above, and only the FFTW path differs:

  Do you know/need a command to bind your jobs to specific nodes ?
  (like taskset -c). Enter N / your_specific_command:

  On most mpi-2 versions, it is better to start an mpijob on the original
  machine and not via ssh on a remote system. If you are using mpi2 set
  MPI_REMOTE to 0

  Set MPI_REMOTE to 0 / 1:

  Remote shell (default is ssh) =

  This is useful only for BIG cases (50 atoms and more / unit cell) and
  your HARDWARE has at least 16 cores (or is a cluster with Infiniband)
  You need to KNOW details about your installed MPI and FFTW )
  Finding the required fftw2/3 mpi-files in /usr and /opt ....

Choose FFTW2 or FFTW3; FFTW3 is used here, so type FFTW3 and press Enter.
  Please specify the ROOT-path of your FFTW installation (like /opt/fftw3):

Here the FFTW3 compiled earlier in this article is used, so type /home/nic/hmli/local and press Enter; it then shows:
  Your FFTW_LIBS are: -lfftw3_mpi -lfftw3 -L/home/nic/hmli/local/lib
  Your FFTW_OPT are : -DFFTW3 -I/home/nic/hmli/local/include

If this is correct, type Y and press Enter to confirm.
  A   Compile all programs (suggested)
  S   Select program

  Q   Quit

  Selection:

Choose S to compile only selected programs or A to compile everything, as needed; here A is chosen for a full compilation. The (rather long) compilation then starts, and at the end it reports whether any errors occurred.
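After a successful compilation, each user typically runs ./userconfig_lapw once so that WIENROOT and the WIEN2k scripts are added to the shell startup files. Parallel runs are then controlled by a .machines file in the case directory; the following is only a minimal sketch with placeholder host names (node01, node02) and core counts:

  granularity:1
  1:node01
  1:node01
  1:node02:4
  lapw0: node01:4
  extrafine:1

The first two lines start two k-point-parallel jobs on node01, the third line runs one job as an MPI job on 4 cores of node02, and the lapw0 line runs lapw0 with MPI on 4 cores. A parallel SCF cycle is then started with run_lapw -p.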