Currently Loaded Modulefiles:
  1) modules/3.2.11.4
  2) cge/3.2.1463_r03f4dfb_fe3.3.0_2019062614
  3) Base-opts/2.4.142-7.0.2.1_2.21__g8f27585.ari
  4) cce/10.0.2
  5) craype-network-aries
  6) craype/2.7.0
  7) cray-libsci/20.06.1
  8) pmi/5.0.16
  9) rca/2.2.20-7.0.2.1_2.27__g8e3fb5b.ari
 10) atp/3.7.4
 11) perftools-base/20.08.0
 12) PrgEnv-cray/6.0.8
 13) cray-mpich/7.7.15
 14) slurm/20.02.2-1
 15) craype-haswell
 16) xalt/2.8.10
 17) daint-gpu
 18) cudatoolkit/10.2.89_3.28-7.0.2.1_2.17__g52c0314
 19) CMake/3.14.5
+ umask 0002
+ mkdir --mode=0775 -p /scratch/snx3000/jenkg90/jenkins-g90-DBCSR-580.cray
+ cd /scratch/snx3000/jenkg90/jenkins-g90-DBCSR-580.cray
+ export CRAY_CUDA_MPS=1
+ CRAY_CUDA_MPS=1
+ export OMP_PROC_BIND=TRUE
+ OMP_PROC_BIND=TRUE
+ env
+ tee -a test.out
ghprbPullId=391 ASSEMBLER_AARCH64=/opt/cray/pe/cce/10.0.2/binutils/cross/x86_64-aarch64/aarch64-linux-gnu/bin/as PE_TPSL_DEFAULT_GENCOMPILERS_GNU_x86_skylake=8.2 XALT_DIR=/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10 CRAYPAT_ALPS_COMPONENT=/opt/cray/pe/perftools/20.08.0/sbin/pat_alps CRAYPE_LINK_TYPE=dynamic PE_LIBSCI_DEFAULT_VOLATILE_PRGENV=CRAYCLANG GNU INTEL PE_ATP_PKGCONFIG_VARIABLES=ATP_CFLAGS_@prgenv@_@language@ LD_LIBRARY_PATH=/opt/nvidia/cudatoolkit10.2/10.2.89_3.28-7.0.2.1_2.17__g52c0314/lib64:/opt/nvidia/cudatoolkit10.2/10.2.89_3.28-7.0.2.1_2.17__g52c0314/extras/CUPTI/lib64:/opt/cray/pe/mpt/7.7.15/gni/mpich-cray/9.0/lib:/opt/cray/pe/mpt/7.7.15/gni/mpich-crayclang/9.0/lib:/opt/cray/pe/perftools/20.08.0/lib64:/opt/cray/rca/2.2.20-7.0.2.1_2.27__g8e3fb5b.ari/lib64:/opt/cray/pe/pmi/5.0.16/lib64:/opt/cray/pe/libsci/20.06.1/CRAYCLANG/9.0/x86_64/lib:/opt/cray/pe/cce/10.0.2/cce-clang/x86_64/lib:/opt/cray/pe/cce/10.0.2/cce/x86_64/lib:/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10/lib64:/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10/lib:/opt/cray/pe/papi/6.0.0.2/lib64 PE_TPSL_64_DEFAULT_GENCOMPILERS_GNU_sandybridge=8.2 PE_TPSL_64_DEFAULT_GENCOMPS_CRAYCLANG_x86_64=90 HOSTTYPE=x86_64 ghprbPullTitle=Stand-alone benchmarks and test for ACC/LIBSMM interface ALPS_LLI_STATUS_OFFSET=1 ghprbActualCommitAuthorEmail=hans.pabst@intel.com GIT_COMMIT=3ff423ee90da1d93b688fc4353ded871b08fc209 PE_TPSL_64_DEFAULT_GENCOMPILERS_GNU_x86_skylake=8.2 ATP_IGNORE_SIGTERM=1 XTPE_NETWORK_TARGET=aries CSCS_CUSTOM_ENV=true RUN_DISPLAY_URL=https://lisone.cscs.ch/job/g90/job/DBCSR/580/display/redirect PE_TPSL_DEFAULT_GENCOMPS_CRAYCLANG_haswell=90 PE_FFTW_DEFAULT_TARGET_share=share PE_FFTW_DEFAULT_TARGET_ivybridge=ivybridge SLURM_NODEID=0 JENKINS_URL=https://lisone.cscs.ch/ SLURM_TASK_PID=7031 PE_TRILINOS_DEFAULT_GENCOMPS_CRAYCLANG_x86_64=90 PKG_CONFIG_PATH_DEFAULT=/opt/cray/pe/papi/6.0.0.2/lib64/pkgconfig PRGENVMODULES=PrgEnv-cray:PrgEnv-gnu:PrgEnv-intel:PrgEnv-pgi EXECUTOR_NUMBER=0 PE_TPSL_64_DEFAULT_GENCOMPILERS_INTEL_sandybridge=19.0 PE_TRILINOS_DEFAULT_GENCOMPS_GNU_x86_64=82 PE_LIBSCI_OMP_REQUIRES= SSH_CONNECTION=148.187.144.90 51824 148.187.26.98 22 PE_MPICH_NV_LIBS_nvidia35=-lcudart LESSCLOSE=lessclose.sh %s %s PE_SMA_DEFAULT_DIR_PGI_DEFAULT64=64 CRAY_LD_LIBRARY_PATH=/opt/nvidia/cudatoolkit10.2/10.2.89_3.28-7.0.2.1_2.17__g52c0314/lib64:/opt/nvidia/cudatoolkit10.2/10.2.89_3.28-7.0.2.1_2.17__g52c0314/extras/CUPTI/lib64:/opt/cray/pe/mpt/7.7.15/gni/mpich-cray/9.0/lib:/opt/cray/pe/mpt/7.7.15/gni/mpich-crayclang/9.0/lib:/opt/cray/pe/perftools/20.08.0/lib64:/opt/cray/rca/2.2.20-7.0.2.1_2.27__g8e3fb5b.ari/lib64:/opt/cray/pe/pmi/5.0.16/lib64:/opt/cray/pe/libsci/20.06.1/CRAYCLANG/9.0/x86_64/lib:/opt/cray/pe/cce/10.0.2/cce-clang/x86_64/lib:/opt/cray/pe/cce/10.0.2/cce/x86_64/lib
PE_MPICH_DEFAULT_DIR_CRAY_DEFAULT64=64 PE_PAPI_DEFAULT_ACCEL_FAMILY_LIBS_nvidia=,-lcupti,-lcudart,-lcuda PE_LIBSCI_ACC_DEFAULT_PKGCONFIG_VARIABLES=PE_LIBSCI_ACC_DEFAULT_NV_SUFFIX_@accelerator@ PE_TRILINOS_DEFAULT_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/trilinos/12.18.1.1/@PRGENV@/@PE_TRILINOS_DEFAULT_GENCOMPS@/@PE_TRILINOS_DEFAULT_TARGET@/lib/pkgconfig XKEYSYMDB=/usr/X11R6/lib/X11/XKeysymDB SLURM_PRIO_PROCESS=0 PE_ENV=CRAY LINKER_AARCH64=/opt/cray/pe/cce/10.0.2/binutils/cross/x86_64-aarch64/aarch64-linux-gnu/bin/ld PE_TPSL_64_DEFAULT_GENCOMPS_INTEL_x86_skylake=190 CRAY_MPICH2_DIR=/opt/cray/pe/mpt/7.7.15/gni/mpich-crayclang/9.0 PE_LIBSCI_DEFAULT_GENCOMPS_GNU_x86_64=81 PE_PETSC_DEFAULT_GENCOMPS_GNU_x86_skylake=82 PE_PETSC_DEFAULT_GENCOMPILERS_CRAYCLANG_haswell=9.0 CRAYPAT_LD_LIBRARY_PATH=/opt/cray/pe/gcc-libs:/opt/cray/gcc-libs:/opt/cray/pe/perftools/20.08.0/lib64 PE_PETSC_DEFAULT_VOLATILE_PRGENV=CRAYCLANG CRAYCLANG64 GNU GNU64 INTEL INTEL64 APPS_CSCS=/apps/cscs/daint FTN_X86_64=/opt/cray/pe/cce/10.0.2/cce/x86_64 PE_PRODUCT_LIST=CRAYPE_HASWELL:CRAY_RCA:CRAY_PMI:CRAY_LIBSCI:CRAYPE:CRAY:PERFTOOLS:CRAYPAT PE_TPSL_64_DEFAULT_REQUIRED_PRODUCTS=PE_MPICH:PE_LIBSCI CRAY_CRAYPE_PREFIX=/opt/cray/pe/craype/2.7.0 CRAYPAT_ROOT=/opt/cray/pe/perftools/20.08.0 PE_TPSL_DEFAULT_GENCOMPILERS_GNU_x86_64=8.2 PE_LIBSCI_MODULE_NAME=cray-libsci/20.06.1 PE_TPSL_64_DEFAULT_GENCOMPS_GNU_x86_skylake=82 PE_MPICH_DEFAULT_DIR_CRAYCLANG_DEFAULT64=64 CRAY_BINUTILS_ROOT_X86_64=/opt/cray/pe/cce/10.0.2/binutils/x86_64/x86_64-pc-linux-gnu/../ SLURM_SUBMIT_DIR=/users/jenkg90/workspace/g90/DBCSR BUILD_ID=580 PE_MPICH_CXX_PKGCONFIG_LIBS=mpichcxx PE_TPSL_DEFAULT_GENCOMPS_CRAYCLANG_x86_skylake=90 ghprbActualCommit=198abfb81e8dd9475fdc19a175c7ffc8a3d0cd33 PE_TPSL_DEFAULT_GENCOMPILERS_INTEL_x86_skylake=19.0 PE_TPSL_64_DEFAULT_GENCOMPS_GNU_haswell=82 WINDOWMANAGER=/usr/bin/mate-session PE_TPSL_64_DEFAULT_GENCOMPILERS_CRAYCLANG_x86_skylake=9.0 LESS=-M -I -R PE_LIBSCI_DEFAULT_GENCOMPS_CRAYCLANG_x86_64=90 PE_PETSC_DEFAULT_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/petsc/3.13.3.0/complex/@PRGENV@/@PE_PETSC_DEFAULT_GENCOMPS@/@PE_PETSC_DEFAULT_TARGET@/lib/pkgconfig ATP_INSTALL_DIR=/opt/cray/pe/atp/3.7.4 JAVA_ROOT=/usr/lib64/jvm/java XALT_BINARYDATA_SIZE=5000 PE_TPSL_DEFAULT_GENCOMPILERS_CRAYCLANG_x86_skylake=9.0 HOSTNAME=nid03400 SLURM_CSCS=yes PE_TPSL_DEFAULT_GENCOMPILERS_CRAYCLANG_sandybridge=9.0 ghprbPullAuthorLogin=hfp OLDPWD=/users/jenkg90/workspace/g90/DBCSR RUN_CHANGES_DISPLAY_URL=https://lisone.cscs.ch/job/g90/job/DBCSR/580/display/redirect?page=changes ghprbAuthorRepoGitUrl=https://github.com/hfp/dbcsr.git PE_TPSL_DEFAULT_GENCOMPS_GNU_haswell=82 APPS=/apps/daint CSHEDIT=emacs PE_TPSL_DEFAULT_GENCOMPILERS_GNU_haswell=8.2 SLURM_CPUS_PER_TASK=3 CRAY_RCA_INCLUDE_OPTS=-I/opt/cray/rca/2.2.20-7.0.2.1_2.27__g8e3fb5b.ari/include -I/opt/cray/krca/2.2.7-7.0.2.1_2.22__ge897ee1.ari/include -I/opt/cray-hss-devel/9.0.0/include OMP_PROC_BIND=TRUE PE_TPSL_DEFAULT_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/tpsl/20.03.2/@PRGENV@/@PE_TPSL_DEFAULT_GENCOMPS@/@PE_TPSL_DEFAULT_TARGET@/lib/pkgconfig ENVIRONMENT=BATCH EBVERSIONCMAKE=3.14.5 GPG_TTY=not a tty LESS_ADVANCED_PREPROCESSOR=no PE_MPICH_DIR_CRAY_DEFAULT64=64 PE_ATP_MODULE_NAME=atp PE_MPICH_GENCOMPS_CRAY=90 ghprbPullLongDescription=Added some test code (carry-forward from OpenMP/backend experiments), which exercises the ACC interface in an abstract fashion (backend agnostic); building this test may be integrated later with CMake infrastructure. Added two benchmark drivers (matrix transpose and multiplication). 
Added Makefile to build test and driver in a stand-alone fashion using the CUDA based backend (please note, the Makefile is not as sophisticated as the CMake based tool chain, however, it is meant to be simple or easy to adjust for the local environment). PE_PETSC_DEFAULT_GENCOMPS_CRAYCLANG_x86_skylake=90 PE_PETSC_DEFAULT_GENCOMPILERS_INTEL_x86_64=19.1 PE_FFTW_DEFAULT_TARGET_x86_64=x86_64 ghprbGhRepository=cp2k/dbcsr COLORTERM=1 JENKINS_NODE_COOKIE=7704c57c-dad3-4708-84f2-533c80fb75cd GCC_X86_64=/opt/gcc/8.1.0/snos ASSEMBLER_X86_64=/opt/cray/pe/cce/10.0.2/binutils/x86_64/x86_64-pc-linux-gnu/bin/as FPATH=:/opt/cray/pe/modules/3.2.11.4/init/sh_funcs/no_redirect:/opt/cray/pe/modules/3.2.11.4/init/sh_funcs/no_redirect PE_TPSL_64_DEFAULT_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/tpsl/20.03.2/@PRGENV@64/@PE_TPSL_64_DEFAULT_GENCOMPS@/@PE_TPSL_64_DEFAULT_TARGET@/lib/pkgconfig PE_TPSL_64_DEFAULT_GENCOMPS_CRAYCLANG_x86_skylake=90 PE_TPSL_64_DEFAULT_GENCOMPILERS_CRAYCLANG_sandybridge=9.0 CRAY_PERFTOOLS_VERSION=20.08.0 ROCR_VISIBLE_DEVICES=0 PE_PKGCONFIG__PRODUCTS=PE_ATP PE_PETSC_DEFAULT_GENCOMPS_GNU_sandybridge=82 CRAY_CUDA_MPS=1 EBROOTCMAKE=/apps/daint/UES/jenkins/7.0.UP02/gpu/easybuild/software/CMake/3.14.5 PE_NETCDF_HDF5PARALLEL_DEFAULT_REQUIRED_PRODUCTS=PE_HDF5_PARALLEL SQUEUE_SORT=-t,e,S ATP_CFLAGS= JAVA_HOME=/usr/lib64/jvm/java COMPILERRT_PATH_X86_64=/opt/cray/pe/cce/10.0.2/cce-clang/x86_64/lib/clang/10.0.0/lib/linux PE_LIBSCI_GENCOMPILERS_CRAYCLANG_x86_64=9.0 PE_PETSC_DEFAULT_GENCOMPILERS_INTEL_x86_skylake=19.1 PE_FFTW_DEFAULT_TARGET_x86_skylake=x86_skylake SQUEUE_FORMAT=%.8i %.8u %.7a %.14j %.3t %9r %19S %.10M %.10L %.5D %.4C APP2_STATE=20.08.0 SLURM_PROCID=0 JOB_BASE_NAME=DBCSR PE_LIBSCI_ACC_DEFAULT_GENCOMPS_GNU_x86_64=81 PE_MPICH_DEFAULT_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/mpt/7.7.15/gni/mpich-@PRGENV@@PE_MPICH_DEFAULT_DIR_DEFAULT64@/@PE_MPICH_DEFAULT_GENCOMPS@/lib/pkgconfig LINKER_X86_64=/opt/cray/pe/cce/10.0.2/binutils/x86_64/x86_64-pc-linux-gnu/bin/ld SLURM_JOB_GID=31350 WORKSPACE_TMP=/users/jenkg90/workspace/g90/DBCSR@tmp MACHTYPE=x86_64-suse-linux PE_FFTW_DEFAULT_TARGET_broadwell=broadwell XTPE_LINK_TYPE=dynamic ghprbTriggerAuthorLogin=codecov[bot] PE_TRILINOS_DEFAULT_GENCOMPILERS_CRAYCLANG_x86_64=9.0 PE_PAPI_DEFAULT_ACCEL_LIBS= SLURMD_NODENAME=nid03400 PE_LIBSCI_ACC_DEFAULT_GENCOMPS_CRAYCLANG_x86_64=90 PE_SMA_DEFAULT_COMPFLAG= XALT_EXECUTABLE_TRACKING=yes CRAY_CUDATOOLKIT_POST_LINK_OPTS=-L/opt/nvidia/cudatoolkit10.2/10.2.89_3.28-7.0.2.1_2.17__g52c0314/lib64 -L/opt/nvidia/cudatoolkit10.2/10.2.89_3.28-7.0.2.1_2.17__g52c0314/extras/CUPTI/lib64 -Wl,--as-needed -Wl,-lcupti -Wl,-lcudart -Wl,--no-as-needed -L/opt/cray/nvidia/default/lib64 -lcuda PE_PKGCONFIG_PRODUCTS=PE_MPICH:PE_LIBSCI PE_CRAY_DEFAULT_FIXED_PKGCONFIG_PATH=/opt/cray/pe/ga/5.3.0.10/CRAY/8.6/lib/pkgconfig PE_PETSC_DEFAULT_GENCOMPS_CRAYCLANG_x86_64=90 CRAY_MPICH_BASEDIR=/opt/cray/pe/mpt/7.7.15/gni PE_PETSC_DEFAULT_GENCOMPS_INTEL_sandybridge=191 PE_TPSL_64_DEFAULT_GENCOMPS_INTEL_sandybridge=190 PE_LIBSCI_GENCOMPILERS_GNU_x86_64=8.1 MINICOM=-c on SLURM_TASKS_PER_NODE=4 PAT_BUILD_PAPI_LIBDIR=/opt/cray/pe/papi/6.0.0.2/lib64 XALT_GPU_TRACKING=no PE_MPICH_PKGCONFIG_VARIABLES=PE_MPICH_NV_LIBS_@accelerator@:PE_MPICH_ALTERNATE_LIBS_@multithreaded@:PE_MPICH_ALTERNATE_LIBS_@dpm@ PE_MPICH_PKGCONFIG_LIBS=mpich PE_PARALLEL_NETCDF_DEFAULT_FIXED_PRGENV=PGI INTEL CRAYCLANG GNU QT_SYSTEM_DIR=/usr/share/desktop-data OSTYPE=linux PE_LIBSCI_ACC_DEFAULT_NV_SUFFIX_nvidia60=nv60 PE_LEVEL=10.0 GIT_URL=https://github.com/cp2k/dbcsr.git 
PE_NETCDF_DEFAULT_REQUIRED_PRODUCTS=PE_HDF5 PE_MPICH_NV_LIBS= PE_FFTW2_DEFAULT_REQUIRED_PRODUCTS=PE_MPICH HUDSON_COOKIE=97afbbcb-f85f-4420-a10b-751a2dd7164c PE_PETSC_DEFAULT_GENCOMPS_GNU_x86_64=82 XDG_SESSION_ID=21109 PE_TPSL_DEFAULT_REQUIRED_PRODUCTS=PE_MPICH:PE_LIBSCI PE_TPSL_DEFAULT_GENCOMPS_GNU_sandybridge=82 SLURM_NNODES=1 USER=jenkg90 PAGER=less MODULE_VERSION=3.2.11.4 CRAY_CXX_IPA_LIBS_AARCH64=/opt/cray/pe/cce/10.0.2/cce/aarch64/lib/libcray-c++-rts.a PE_TPSL_DEFAULT_GENCOMPS_CRAYCLANG_x86_64=90 SHMEM_ABORT_ON_ERROR=1 PE_PKG_CONFIG_PATH=/opt/cray/pe/valgrind4hpc/2.7.2/lib/pkgconfig:/opt/cray/pe/cti/2.7.4/lib/pkgconfig:/opt/cray/pe/atp/3.7.4/lib/pkgconfig PE_CRAYCLANG_DEFAULT_FIXED_PKGCONFIG_PATH=/opt/cray/pe/parallel-netcdf/1.12.1.0/CRAYCLANG/9.0/lib/pkgconfig:/opt/cray/pe/netcdf-hdf5parallel/4.7.4.0/CRAYCLANG/9.0/lib/pkgconfig:/opt/cray/pe/netcdf/4.7.4.0/CRAYCLANG/9.0/lib/pkgconfig:/opt/cray/pe/hdf5-parallel/1.12.0.0/CRAYCLANG/9.0/lib/pkgconfig:/opt/cray/pe/hdf5/1.12.0.0/CRAYCLANG/9.0/lib/pkgconfig SLURM_NTASKS_PER_CORE=1 ghprbActualCommitAuthor=Hans Pabst PE_MPICH_DEFAULT_GENCOMPILERS_CRAY=9.0 PE_LIBSCI_DEFAULT_REQUIRED_PRODUCTS=PE_MPICH PE_LIBSCI_ACC_DEFAULT_NV_SUFFIX_nvidia35=nv35 TOOLMODULES=apprentice:apprentice2:atp:chapel:cray-lgdb:cray-snplauncher:craypat:craypkg-gen:ddt:gdb:iobuf:papi:perftools:perftools-lite:stat:totalview:xt-craypat:xt-lgdb:xt-papi:xt-totalview BUILD_NUMBER=580 ghprbTargetBranch=develop CRAY_CPU_TARGET=haswell PE_TPSL_64_DEFAULT_GENCOMPILERS_GNU_x86_64=8.2 PE_LIBSCI_GENCOMPILERS_INTEL_x86_64=16.0 PE_INTEL_DEFAULT_FIXED_PKGCONFIG_PATH=/opt/cray/pe/parallel-netcdf/1.12.1.0/INTEL/19.1/lib/pkgconfig:/opt/cray/pe/netcdf-hdf5parallel/4.7.4.0/INTEL/19.1/lib/pkgconfig:/opt/cray/pe/netcdf/4.7.4.0/INTEL/19.1/lib/pkgconfig:/opt/cray/pe/mpt/7.7.15/gni/mpich-INTEL/16.0/lib/pkgconfig:/opt/cray/pe/hdf5-parallel/1.12.0.0/INTEL/19.1/lib/pkgconfig:/opt/cray/pe/hdf5/1.12.0.0/INTEL/19.1/lib/pkgconfig:/opt/cray/pe/ga/5.3.0.10/INTEL/18.0/lib/pkgconfig PE_GA_DEFAULT_GENCOMPS_GNU=82 73 PE_SMA_DEFAULT_PKGCONFIG_VARIABLES=PE_SMA_COMPFLAG_@prgenv@ PE_LIBSCI_VOLATILE_PRGENV=CRAYCLANG GNU INTEL KSH_AUTOLOAD=1 PE_MPICH_GENCOMPILERS_PGI=20.1 PE_TPSL_64_DEFAULT_GENCOMPS_CRAYCLANG_sandybridge=90 SLURM_NTASKS_PER_NODE=4 PKGCONFIG_ENABLED=1 PE_PETSC_DEFAULT_GENCOMPILERS_CRAYCLANG_sandybridge=9.0 PE_MPICH_GENCOMPS_GNU=82 81 71 MORE=-sl PE_PAPI_DEFAULT_ACCEL_LIBS_nvidia35=,-lcupti,-lcudart,-lcuda CRAY_PERFTOOLS_PREFIX=/opt/cray/pe/perftools/20.08.0 PE_FORTRAN_PKGCONFIG_LIBS=mpichf90 PE_MPICH_DEFAULT_GENCOMPILERS_CRAYCLANG=9.0 WORKSPACE=/users/jenkg90/workspace/g90/DBCSR ghprbPullDescription=GitHub pull request #391 of commit 198abfb81e8dd9475fdc19a175c7ffc8a3d0cd33, no merge conflicts. 
PE_TRILINOS_DEFAULT_GENCOMPILERS_INTEL_x86_64=19.1 PE_TPSL_64_DEFAULT_GENCOMPS_CRAYCLANG_haswell=90 CRAY_CXX_IPA_LIBS=/opt/cray/pe/cce/10.0.2/cce/x86_64/lib/libcray-c++-rts.a PE_MPICH_GENCOMPILERS_CRAY=9.0 CRAY_LIBSCI_BASE_DIR=/opt/cray/pe/libsci/20.06.1 PE_NETCDF_HDF5PARALLEL_DEFAULT_FIXED_PRGENV=GNU CRAYCLANG PGI INTEL PWD=/scratch/snx3000/jenkg90/jenkins-g90-DBCSR-580.cray TARGETMODULES=craype-abudhabi:craype-abudhabi-cu:craype-accel-host:craype-accel-nvidia20:craype-accel-nvidia30:craype-accel-nvidia35:craype-barcelona:craype-broadwell:craype-haswell:craype-hugepages128K:craype-hugepages128M:craype-hugepages16M:craype-hugepages256M:craype-hugepages2M:craype-hugepages32M:craype-hugepages4M:craype-hugepages512K:craype-hugepages512M:craype-hugepages64M:craype-hugepages8M:craype-intel-knc:craype-interlagos:craype-interlagos-cu:craype-istanbul:craype-ivybridge:craype-mc12:craype-mc8:craype-mic-knl:craype-network-aries:craype-network-gemini:craype-network-infiniband:craype-network-none:craype-network-seastar:craype-sandybridge:craype-shanghai:craype-target-compute_node:craype-target-local_host:craype-target-native:craype-xeon:xtpe-barcelona:xtpe-interlagos:xtpe-interlagos-cu:xtpe-istanbul:xtpe-mc12:xtpe-mc8:xtpe-network-gemini:xtpe-network-seastar:xtpe-shanghai:xtpe-target-native:xtpe-xeon HUDSON_URL=https://lisone.cscs.ch/ PE_MPICH_NV_LIBS_nvidia20=-lcudart SLURM_JOB_NODELIST=nid03400 HOME=/users/jenkg90 CRAY_PMI_INCLUDE_OPTS=-I/opt/cray/pe/pmi/5.0.16/include SLURM_CLUSTER_NAME=daint PE_TPSL_DEFAULT_GENCOMPILERS_INTEL_sandybridge=19.0 PE_TPSL_64_DEFAULT_GENCOMPILERS_GNU_haswell=8.2 CRAYLIBS_AARCH64=/opt/cray/pe/cce/10.0.2/cce/aarch64/lib NODE_NAME=g90_daintvm1 PELOCAL_PRGENV=true PE_PETSC_DEFAULT_GENCOMPILERS_CRAYCLANG_x86_skylake=9.0 PE_TPSL_DEFAULT_GENCOMPS_CRAYCLANG_sandybridge=90 PE_PETSC_DEFAULT_GENCOMPS_CRAYCLANG_haswell=90 PE_TPSL_64_DEFAULT_GENCOMPS_GNU_sandybridge=82 CMAKE_PREFIX_PATH=/apps/daint/UES/jenkins/7.0.UP02/gpu/easybuild/software/CMake/3.14.5:/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10 craype_already_loaded=0 PE_LIBSCI_REQUIRED_PRODUCTS=PE_MPICH ATP_HOME=/opt/cray/pe/atp/3.7.4/alps SLURM_NODELIST=nid03400 HOST=nid03400 HUDSON_SERVER_COOKIE=539a71430b859313 SSH_CLIENT=148.187.144.90 51824 22 ALT_LINKER=/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10/bin/ld PE_PETSC_DEFAULT_REQUIRED_PRODUCTS=PE_MPICH:PE_LIBSCI:PE_HDF5_PARALLEL:PE_TPSL CRAY_PRGENVCRAY=loaded PE_TPSL_64_DEFAULT_GENCOMPILERS_INTEL_haswell=19.0 SINFO_FORMAT=%9P %5a %8s %.10l %.6c %.6z %.7D %10T %N XNLSPATH=/usr/share/X11/nls CPATH=/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10/include PE_MPICH_FORTRAN_PKGCONFIG_LIBS=mpichf90 CHPL_CG_CPP_LINES=1 SLURM_NTASKS=4 PE_TPSL_64_DEFAULT_GENCOMPILERS_INTEL_x86_64=19.0 PE_LIBSCI_PKGCONFIG_LIBS=libsci_mpi:libsci PE_LIBSCI_ACC_DEFAULT_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/libsci_acc/20.06.1/@PRGENV@/@PE_LIBSCI_ACC_DEFAULT_GENCOMPS@/@PE_LIBSCI_ACC_DEFAULT_TARGET@/lib/pkgconfig JENKINS_HOME=/var/lib/jenkins XALT_TRANSMISSION_STYLE=curl PE_TPSL_DEFAULT_GENCOMPILERS_INTEL_haswell=19.0 PE_MPICH_DEFAULT_GENCOMPS_CRAY=90 JOB_NAME=g90/DBCSR SDK_HOME=/usr/lib64/jvm/java PE_LIBSCI_OMP_REQUIRES_openmp=_mp RUN_TESTS_DISPLAY_URL=https://lisone.cscs.ch/job/g90/job/DBCSR/580/display/redirect?page=tests SLURM_JOB_CPUS_PER_NODE=24 CRAY_CUDATOOLKIT_DIR=/opt/nvidia/cudatoolkit10.2/10.2.89_3.28-7.0.2.1_2.17__g52c0314 CRAYLIBS_X86_64=/opt/cray/pe/cce/10.0.2/cce/x86_64/lib XDG_DATA_DIRS=/apps/daint/UES/jenkins/7.0.UP02/gpu/easybuild/software/CMake/3.14.5/share:/usr/share 
CUDA_CACHE_PATH=/scratch/snx3000/jenkg90/.nv/ComputeCache PE_CXX_PKGCONFIG_LIBS=mpichcxx SLURM_TOPOLOGY_ADDR=s21.s8.nid03400 CRAY_BINUTILS_ROOT=/opt/cray/pe/cce/10.0.2/binutils/x86_64/x86_64-pc-linux-gnu/../ PE_SMA_DEFAULT_DIR_CRAYCLANG_DEFAULT64=64 PROJECT=/project/g90/jenkg90 cce_already_loaded=0 PE_TPSL_64_DEFAULT_GENCOMPS_INTEL_haswell=190 CRAY_RCA_POST_LINK_OPTS=-L/opt/cray/rca/2.2.20-7.0.2.1_2.27__g8e3fb5b.ari/lib64 -lrca PE_MPICH_DEFAULT_GENCOMPS_PGI=201 PE_MPICH_MODULE_NAME=cray-mpich NLSPATH=/opt/cray/pe/cce/10.0.2/cce/x86_64/share/nls/En/%N.cat CRAY_LIBSCI_DIR=/opt/cray/pe/libsci/20.06.1 LIBGL_DEBUG=quiet SLURM_WORKING_CLUSTER=daint:daintsl01:6817:8960:108 PE_LIBSCI_DEFAULT_OMP_REQUIRES= PE_LIBSCI_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/libsci/20.06.1/@PRGENV@/@PE_LIBSCI_GENCOMPS@/@PE_LIBSCI_TARGET@/lib/pkgconfig PE_MPICH_TARGET_VAR_nvidia35=-lcudart JDK_HOME=/usr/lib64/jvm/java ATP_VERSION=3.7.4 HUDSON_HOME=/var/lib/jenkins PE_TPSL_DEFAULT_GENCOMPILERS_GNU_sandybridge=8.2 COMPILER_PATH=/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10/bin JOB_DISPLAY_URL=https://lisone.cscs.ch/job/g90/job/DBCSR/display/redirect LIBSCI_VERSION=20.06.1 SLURM_JOB_NAME=DBCSR.cray.test PROFILEREAD=true EBROOTXALT=/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10 PE_MPICH_DIR_PGI_DEFAULT64=64 PE_TRILINOS_DEFAULT_VOLATILE_PRGENV=CRAYCLANG GNU INTEL TMPDIR=/tmp LIBRARY_PATH=/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10/lib64:/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10/lib PE_MPICH_DEFAULT_GENCOMPILERS_GNU=8.2 8.1 7.1 PERFTOOLS_VERSION=20.08.0 SLURM_JOB_GPUS=0 PE_LIBSCI_DEFAULT_OMP_REQUIRES_openmp=_mp CRAY_BINUTILS_VERSION=/opt/cray/pe/cce/10.0.2 CRAY_CCE_CLANGSHARE=/opt/cray/pe/cce/10.0.2/cce-clang/x86_64/share PE_PKGCONFIG_LIBS=cray-cudatoolkit:mpich:libAtpSigHandler:cray-rca:libsci_mpi:libsci CRAY_PMI_PREFIX=/opt/cray/pe/pmi/5.0.16 SLURM_JOBID=27166243 RCLOCAL_PRGENV=true PE_TPSL_DEFAULT_GENCOMPS_GNU_x86_skylake=82 USERMODULES=PrgEnv-cray:PrgEnv-gnu:PrgEnv-intel:PrgEnv-pathscale:PrgEnv-pgi:acml:alps:apprentice:apprentice2:atp:blcr:cce:chapel:cray-ccdb:cray-fftw:cray-ga:cray-hdf5:cray-hdf5-parallel:cray-lgdb:cray-libsci:cray-libsci_acc:cray-mpich:cray-mpich-compat:cray-mpich2:cray-netcdf:cray-netcdf-hdf5parallel:cray-parallel-netcdf:cray-petsc:cray-petsc-complex:cray-shmem:cray-snplauncher:cray-tpsl:cray-trilinos:craypat:craype:craypkg-gen:cudatoolkit:ddt:fftw:ga:gcc:hdf5:hdf5-parallel:intel:iobuf:java:lgdb:libfast:libsci_acc:mpich1:netcdf:netcdf-hdf5parallel:netcdf-nofsync:netcdf-nofsync-hdf5parallel:ntk:onesided:papi:parallel-netcdf:pathscale:perftools:perftools-lite:petsc:petsc-complex:pgi:pmi:stat:totalview:tpsl:trilinos:xt-asyncpe:xt-craypat:xt-lgdb:xt-libsci:xt-mpich2:xt-mpt:xt-papi:xt-shmem:xt-totalview PAT_REPORT_PRUNE_NAME=_cray$mt_execute_,_cray$mt_start_,_cray$mt_kmpc_fork,__cray_hwpc_,f_cray_hwpc_,cstart,hip_impl::,hipLaunchKernelGGL,__pat_,pat_region_,PAT_,OMP.slave_loop,slave_entry,_new_slave_entry,_thread_pool_slave_entry,THREAD_POOL_join,__libc_start_main,_start,__start,start_thread,__wrap_,UPC_ADIO_,_upc_,upc_,__caf_,__pgas_,syscall,__device_stub,__cray_acc_hw,_ZZ,.omp_outlined. 
SLURM_CONF=/etc/opt/slurm/slurm.conf PE_PETSC_DEFAULT_GENCOMPILERS_GNU_sandybridge=8.2 PE_MPICH_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/mpt/7.7.15/gni/mpich-@PRGENV@@PE_MPICH_DIR_DEFAULT64@/@PE_MPICH_GENCOMPS@/lib/pkgconfig LOADEDMODULES=modules/3.2.11.4:cge/3.2.1463_r03f4dfb_fe3.3.0_2019062614:Base-opts/2.4.142-7.0.2.1_2.21__g8f27585.ari:cce/10.0.2:craype-network-aries:craype/2.7.0:cray-libsci/20.06.1:pmi/5.0.16:rca/2.2.20-7.0.2.1_2.27__g8e3fb5b.ari:atp/3.7.4:perftools-base/20.08.0:PrgEnv-cray/6.0.8:cray-mpich/7.7.15:slurm/20.02.2-1:craype-haswell:xalt/2.8.10:daint-gpu:cudatoolkit/10.2.89_3.28-7.0.2.1_2.17__g52c0314:CMake/3.14.5 PE_TPSL_DEFAULT_GENCOMPILERS_INTEL_x86_64=19.0 PE_LIBSCI_ACC_DEFAULT_GENCOMPILERS_GNU_x86_64=8.1 CRAY_CUDATOOLKIT_INCLUDE_OPTS=-I/opt/nvidia/cudatoolkit10.2/10.2.89_3.28-7.0.2.1_2.17__g52c0314/include -I/opt/nvidia/cudatoolkit10.2/10.2.89_3.28-7.0.2.1_2.17__g52c0314/extras/CUPTI/include -I/opt/nvidia/cudatoolkit10.2/10.2.89_3.28-7.0.2.1_2.17__g52c0314/extras/Debugger/include CRAYPE_NETWORK_TARGET=aries PE_GA_DEFAULT_FIXED_PRGENV=CRAY PGI INTEL INCLUDE_PATH_AARCH64=/opt/cray/pe/cce/10.0.2/cce/aarch64/include/craylibs PE_MPICH_DEFAULT_GENCOMPILERS_PGI=20.1 CRAY_PE_USE_CLANG=/opt/cray/pe/cce/10.0.2/cce-clang/x86_64/bin/clang PE_TPSL_DEFAULT_GENCOMPS_INTEL_haswell=190 ghprbCredentialsId=7120ebb7-143b-45a1-a34b-19d296edc540 SCRATCH=/scratch/snx3000/jenkg90 GCC_AARCH64=/opt/gcc-cross-aarch64/8.1.0/aarch64 RCLOCAL_BASEOPTS=true LIBRARYMODULES=acml:alps:cray-dwarf:cray-fftw:cray-ga:cray-hdf5:cray-hdf5-parallel:cray-libsci:cray-libsci_acc:cray-mpich:cray-mpich-abi:cray-mpich2:cray-netcdf:cray-netcdf-hdf5parallel:cray-parallel-netcdf:cray-petsc:cray-petsc-complex:cray-shmem:cray-tpsl:cray-trilinos:cudatoolkit:fftw:ga:hdf5:hdf5-parallel:iobuf:libfast:netcdf:netcdf-hdf5parallel:ntk:onesided:papi:petsc:petsc-complex:pmi:tpsl:trilinos:xt-libsci:xt-mpich2:xt-mpt:xt-papi PE_MPICH_ALTERNATE_LIBS_dpm=_dpm SLURM_NODE_ALIASES=(null) SLURM_JOB_QOS=normal SLURM_TOPOLOGY_ADDR_PATTERN=switch.switch.node PE_FFTW_DEFAULT_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/fftw/3.3.8.7/@PE_FFTW_DEFAULT_TARGET@/lib/pkgconfig PE_TPSL_DEFAULT_GENCOMPS_INTEL_x86_64=190 PE_TRILINOS_DEFAULT_GENCOMPS_INTEL_x86_64=191 PE_PAPI_DEFAULT_PKGCONFIG_VARIABLES=PE_PAPI_ACCEL_LIBS_@accelerator@ PE_PAPI_ACCEL_FAMILY_LIBS_@accelerator_family@ MPICH_ABORT_ON_ERROR=1 RUN_ARTIFACTS_DISPLAY_URL=https://lisone.cscs.ch/job/g90/job/DBCSR/580/display/redirect?page=artifacts PE_LIBSCI_GENCOMPS_INTEL_x86_64=160 PE_LIBSCI_DEFAULT_GENCOMPILERS_INTEL_x86_64=16.0 FROM_HEADER= CRAY_MPICH_ROOTDIR=/opt/cray/pe/mpt/7.7.15 ALPS_APP_PE=0 MAIL=/var/mail/jenkg90 PE_MPICH_DEFAULT_DIR_PGI_DEFAULT64=64 PE_HDF5_PARALLEL_DEFAULT_FIXED_PRGENV=GNU CRAYCLANG PGI INTEL CRAY_CCE_SHARE=/opt/cray/pe/cce/10.0.2/cce/x86_64/share PE_MPICH_VOLATILE_PRGENV=PGI GNU CRAYCLANG CRAY PE_INTEL_FIXED_PKGCONFIG_PATH=/opt/cray/pe/mpt/7.7.15/gni/mpich-INTEL/16.0/lib/pkgconfig PE_PETSC_DEFAULT_GENCOMPS_CRAYCLANG_sandybridge=90 CRAY_BINUTILS_BIN_AARCH64=/opt/cray/pe/cce/10.0.2/binutils/cross/x86_64-aarch64/aarch64-linux-gnu/bin SLURM_CPUS_ON_NODE=24 PE_HDF5_DEFAULT_FIXED_PRGENV=GNU CRAYCLANG PGI INTEL PE_MPICH_ALTERNATE_LIBS_multithreaded=_mt XALT_SCALAR_SAMPLING=no PE_TPSL_DEFAULT_GENCOMPS_INTEL_sandybridge=190 PE_TPSL_DEFAULT_GENCOMPS_GNU_x86_64=82 PE_LIBSCI_GENCOMPS_CRAYCLANG_x86_64=90 PE_LIBSCI_DEFAULT_GENCOMPILERS_CRAYCLANG_x86_64=9.0 SLURM_JOB_NUM_NODES=1 
PE_PGI_DEFAULT_FIXED_PKGCONFIG_PATH=/opt/cray/pe/parallel-netcdf/1.12.1.0/PGI/20.1/lib/pkgconfig:/opt/cray/pe/netcdf-hdf5parallel/4.7.4.0/PGI/20.1/lib/pkgconfig:/opt/cray/pe/netcdf/4.7.4.0/PGI/20.1/lib/pkgconfig:/opt/cray/pe/hdf5-parallel/1.12.0.0/PGI/20.1/lib/pkgconfig:/opt/cray/pe/hdf5/1.12.0.0/PGI/20.1/lib/pkgconfig:/opt/cray/pe/ga/5.3.0.10/PGI/17.10/lib/pkgconfig PE_MPICH_GENCOMPS_PGI=201 PE_PETSC_DEFAULT_GENCOMPILERS_GNU_x86_skylake=8.2 PE_PETSC_DEFAULT_GENCOMPILERS_INTEL_haswell=19.1 PE_GA_DEFAULT_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/ga/5.3.0.10/@PRGENV@/@PE_GA_DEFAULT_GENCOMPS@/lib/pkgconfig PE_FFTW_DEFAULT_TARGET_haswell=haswell SLURM_MEM_PER_NODE=61000 LESSKEY=/etc/lesskey.bin BUILD_URL=https://lisone.cscs.ch/job/g90/job/DBCSR/580/ ghprbPullLink=https://github.com/cp2k/dbcsr/pull/391 CRAY_SITE_LIST_DIR=/etc/opt/cray/pe/modules SHELL=/usr/local/bin/bash PE_TPSL_64_DEFAULT_GENCOMPILERS_CRAYCLANG_haswell=9.0 STAGE_NAME=test PE_LIBSCI_ACC_DEFAULT_REQUIRED_PRODUCTS=PE_MPICH:PE_LIBSCI PE_MPICH_GENCOMPILERS_CRAYCLANG=9.0 CRAY_BINUTILS_ROOT_AARCH64=/opt/cray/pe/cce/10.0.2/binutils/cross/x86_64-aarch64/aarch64-linux-gnu/../ JOB_URL=https://lisone.cscs.ch/job/g90/job/DBCSR/ PE_MPICH_FIXED_PRGENV=INTEL CRAY_LIBSCI_PREFIX=/opt/cray/pe/libsci/20.06.1/CRAYCLANG/9.0/x86_64 ghprbCommentBody=# [Codecov](https://codecov.io/gh/cp2k/dbcsr/pull/391?src=pr&el=h1) Report\n> Merging [#391](https://codecov.io/gh/cp2k/dbcsr/pull/391?src=pr&el=desc) (198abfb) into [develop](https://codecov.io/gh/cp2k/dbcsr/commit/570d99f440656e88cff04479647906f9d388e6ca?el=desc) (570d99f) will **decrease** coverage by `0.8%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/cp2k/dbcsr/pull/391/graphs/tree.svg?width=650&height=150&src=pr&token=hCajNIZUiz)](https://codecov.io/gh/cp2k/dbcsr/pull/391?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## develop #391 +/- ##\n=========================================\n- Coverage 62.1% 61.2% -0.9% \n=========================================\n Files 87 87 \n Lines 25741 25430 -311 \n=========================================\n- Hits 15997 15576 -421 \n- Misses 9744 9854 +110 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/cp2k/dbcsr/pull/391?src=pr&el=tree) | Coverage ? 
| |\n|---|---|---|\n| [src/data/dbcsr\_data\_types.F](https://codecov.io/gh/cp2k/dbcsr/pull/391/diff?src=pr&el=tree#diff-c3JjL2RhdGEvZGJjc3JfZGF0YV90eXBlcy5G) | `75.0% <0.0%> (-8.4%)` | :arrow_down: |\n| [src/block/dbcsr\_iterator\_operations.F](https://codecov.io/gh/cp2k/dbcsr/pull/391/diff?src=pr&el=tree#diff-c3JjL2Jsb2NrL2RiY3NyX2l0ZXJhdG9yX29wZXJhdGlvbnMuRg==) | `78.3% <0.0%> (-6.3%)` | :arrow_down: |\n| [src/utils/dbcsr\_min\_heap.F](https://codecov.io/gh/cp2k/dbcsr/pull/391/diff?src=pr&el=tree#diff-c3JjL3V0aWxzL2RiY3NyX21pbl9oZWFwLkY=) | `32.3% <0.0%> (-4.6%)` | :arrow_down: |\n| [src/base/dbcsr\_machine.F](https://codecov.io/gh/cp2k/dbcsr/pull/391/diff?src=pr&el=tree#diff-c3JjL2Jhc2UvZGJjc3JfbWFjaGluZS5G) | `38.0% <0.0%> (-3.8%)` | :arrow_down: |\n| [src/dist/dbcsr\_dist\_methods.F](https://codecov.io/gh/cp2k/dbcsr/pull/391/diff?src=pr&el=tree#diff-c3JjL2Rpc3QvZGJjc3JfZGlzdF9tZXRob2RzLkY=) | `86.1% <0.0%> (-3.4%)` | :arrow_down: |\n| [src/mm/dbcsr\_mm\_accdrv.F](https://codecov.io/gh/cp2k/dbcsr/pull/391/diff?src=pr&el=tree#diff-c3JjL21tL2RiY3NyX21tX2FjY2Rydi5G) | `11.7% <0.0%> (-2.7%)` | :arrow_down: |\n| [src/acc/dbcsr\_acc\_devmem.F](https://codecov.io/gh/cp2k/dbcsr/pull/391/diff?src=pr&el=tree#diff-c3JjL2FjYy9kYmNzcl9hY2NfZGV2bWVtLkY=) | `7.5% <0.0%> (-2.5%)` | :arrow_down: |\n| [src/base/dbcsr\_base\_hooks.F](https://codecov.io/gh/cp2k/dbcsr/pull/391/diff?src=pr&el=tree#diff-c3JjL2Jhc2UvZGJjc3JfYmFzZV9ob29rcy5G) | `25.0% <0.0%> (-2.5%)` | :arrow_down: |\n| [src/dbcsr\_api.F](https://codecov.io/gh/cp2k/dbcsr/pull/391/diff?src=pr&el=tree#diff-c3JjL2RiY3NyX2FwaS5G) | `20.7% <0.0%> (-2.5%)` | :arrow_down: |\n| [src/mm/dbcsr\_mm\_multrec.F](https://codecov.io/gh/cp2k/dbcsr/pull/391/diff?src=pr&el=tree#diff-c3JjL21tL2RiY3NyX21tX211bHRyZWMuRg==) | `68.2% <0.0%> (-2.4%)` | :arrow_down: |\n| ... and [46 more](https://codecov.io/gh/cp2k/dbcsr/pull/391/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/cp2k/dbcsr/pull/391?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `? = absolute (impact)`, `? = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/cp2k/dbcsr/pull/391?src=pr&el=footer). Last update [570d99f...198abfb](https://codecov.io/gh/cp2k/dbcsr/pull/391?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n PE_TPSL_64_DEFAULT_GENCOMPS_GNU_x86_64=82 PMI_NO_FORK=1 PE_LIBSCI_ACC_DEFAULT_NV_SUFFIX_nvidia20=nv20 EBDEVELXALT=/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10/easybuild/xalt-2.8.10-easybuild-devel SLURM_JOB_UID=25581 BUILD_DISPLAY_NAME=#580 CRAY_LIBSCI_VERSION=20.06.1 PE_TPSL_DEFAULT_VOLATILE_PRGENV=CRAYCLANG CRAYCLANG64 GNU GNU64 INTEL INTEL64 XCURSOR_THEME=DMZ CRAYLMD_LICENSE_FILE=/opt/cray/pe/cce/cce.lic SLURM_JOB_PARTITION=cscsci PE_MPICH_DEFAULT_GENCOMPS_GNU=82 81 71 PE_LIBSCI_DEFAULT_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/libsci/20.06.1/@PRGENV@/@PE_LIBSCI_DEFAULT_GENCOMPS@/@PE_LIBSCI_DEFAULT_TARGET@/lib/pkgconfig CRAY_CXX_IPA_LIBS_X86_64=/opt/cray/pe/cce/10.0.2/cce/x86_64/lib/libcray-c++-rts.a SLURM_TIME_FORMAT=relative CRAY_BINUTILS_BIN_X86_64=/opt/cray/pe/cce/10.0.2/binutils/x86_64/bin PE_LIBSCI_DEFAULT_PKGCONFIG_VARIABLES=PE_LIBSCI_DEFAULT_OMP_REQUIRES_@openmp@:PE_SCI_EXT_LIBPATH:PE_SCI_EXT_LIBNAME CC_X86_64=/opt/cray/pe/cce/10.0.2/cce/x86_64 PE_SMA_DEFAULT_COMPFLAG_GNU=-fcray-pointer PE_PETSC_DEFAULT_GENCOMPS_GNU_haswell=82 FORTRAN_SYSTEM_MODULE_NAMES=ftn_lib_definitions PE_SMA_DEFAULT_VOLATILE_PKGCONFIG_PATH=/opt/cray/pe/mpt/7.7.15/gni/sma@PE_SMA_DEFAULT_DIR_DEFAULT64@/lib64/pkgconfig PE_LIBSCI_ACC_DEFAULT_VOLATILE_PRGENV=CRAYCLANG GNU PE_GNU_DEFAULT_FIXED_PKGCONFIG_PATH=/opt/cray/pe/parallel-netcdf/1.12.1.0/GNU/8.2/lib/pkgconfig:/opt/cray/pe/netcdf-hdf5parallel/4.7.4.0/GNU/8.2/lib/pkgconfig:/opt/cray/pe/netcdf/4.7.4.0/GNU/8.2/lib/pkgconfig:/opt/cray/pe/hdf5-parallel/1.12.0.0/GNU/8.2/lib/pkgconfig:/opt/cray/pe/hdf5/1.12.0.0/GNU/8.2/lib/pkgconfig PE_LIBSCI_ACC_DEFAULT_GENCOMPILERS_CRAYCLANG_x86_64=9.0 PE_MPICH_GENCOMPILERS_GNU=8.2 8.1 7.1 SLURM_JOB_USER=jenkg90 CUDA_VISIBLE_DEVICES=0 CRAY_PE_CCE_VARIANT=CC=Clang:FTN=Classic SLURM_NPROCS=4 PE_LIBSCI_DEFAULT_GENCOMPS_INTEL_x86_64=160 CRAY_MPICH2_VER=7.7.15 CRAY_CC_VERSION=10.0.2 PE_GA_DEFAULT_GENCOMPILERS_GNU=8.2 7.3 CUDATOOLKIT_HOME=/opt/nvidia/cudatoolkit10.2/10.2.89_3.28-7.0.2.1_2.17__g52c0314 PE_PETSC_DEFAULT_GENCOMPILERS_CRAYCLANG_x86_64=9.0 SHLVL=4 GIT_BRANCH=origin/pr/391/merge SLURM_SUBMIT_HOST=daintvm1.cscs.ch PE_TRILINOS_DEFAULT_REQUIRED_PRODUCTS=PE_MPICH:PE_HDF5_PARALLEL:PE_NETCDF_HDF5PARALLEL:PE_LIBSCI:PE_TPSL CRAY_LIBSCI_PREFIX_DIR=/opt/cray/pe/libsci/20.06.1/CRAYCLANG/9.0/x86_64 CRAY_CRAYPE_VERSION=2.7.0 SLURM_JOB_ACCOUNT=g90 ACLOCAL_PATH=/apps/daint/UES/jenkins/7.0.UP02/gpu/easybuild/software/CMake/3.14.5/share/aclocal PE_PAPI_DEFAULT_ACCELL_FAMILY_LIBS= CRAY_CUDATOOLKIT_VERSION=10.2.89_3.28-7.0.2.1_2.17__g52c0314 CRAY_BINUTILS_BIN=/opt/cray/pe/cce/10.0.2/binutils/x86_64/bin MANPATH=/opt/nvidia/cudatoolkit10.2/10.2.89_3.28-7.0.2.1_2.17__g52c0314/doc/man:/opt/cray/pe/mpt/7.7.15/gni/man/mpich:/opt/cray/pe/perftools/20.08.0/man:/opt/cray/pe/papi/6.0.0.2/share/pdoc/man:/opt/cray/pe/atp/3.7.4/share/man:/opt/cray/pe/pmi/5.0.16/man:/opt/cray/pe/libsci/20.06.1/man:/opt/cray/pe/man/csmlversion:/opt/cray/pe/craype/2.7.0/man:/opt/cray/pe/cce/10.0.2/cce-clang/x86_64/share/man:/opt/cray/pe/cce/10.0.2/man:/apps/cscs/daint/share/man:/opt/cray/cge/3.2.1463_r03f4dfb_fe3.3.0_2019062614/man:/opt/cray/pe/modules/3.2.11.4/share/man:/usr/local/man:/usr/share/man:/opt/cray/share/man:/opt/cray/pe/man PE_FFTW_DEFAULT_TARGET_mic_knl=mic_knl PE_MPICH_DEFAULT_GENCOMPS_CRAYCLANG=90 CRAY_FTN_VERSION=10.0.2 PE_TPSL_64_DEFAULT_GENCOMPILERS_CRAYCLANG_x86_64=9.0 PE_PETSC_DEFAULT_GENCOMPS_INTEL_haswell=191 BUILD_TAG=jenkins-g90-DBCSR-580 ALPS_APP_ID=18446744065146783267 
MPICH_DIR=/opt/cray/pe/mpt/7.7.15/gni/mpich-crayclang/9.0 PE_PETSC_DEFAULT_GENCOMPILERS_INTEL_sandybridge=19.1 PE_FFTW_DEFAULT_TARGET_sandybridge=sandybridge ATP_CFLAGS_GNU_FORTRAN=-fno-backtrace MODULEPATH=/apps/daint/UES/jenkins/7.0.UP02/gpu/easybuild/tools/modules/all:/apps/daint/UES/jenkins/7.0.UP02/gpu/easybuild/modules/all:/opt/cray/pe/perftools/20.08.0/modulefiles:/opt/cray/pe/craype/2.7.0/modulefiles:/apps/daint/modulefiles:/apps/daint/system/modulefiles:/apps/daint/UES/easybuild/modulefiles:/apps/daint/UES/reframe:/apps/common/system/modulefiles:/opt/cray/ari/modulefiles:/opt/cray/pe/modulefiles:/opt/cray/modulefiles:/opt/modulefiles:/opt/cray/craype/default/modulefiles CRAY_MPICH_DIR=/opt/cray/pe/mpt/7.7.15/gni/mpich-crayclang/9.0 PE_PKGCONFIG_PRODUCTS_DEFAULT=PE_PAPI SLURM_GTIDS=0 NODE_LABELS=g90_daintvm1 LOGNAME=jenkg90 PE_MPICH_DEFAULT_FIXED_PRGENV=INTEL CRAY_PMI_VERSION=5.0.16 CRAY_MPICH_VERSION=7.7.15 PE_MPICH_NV_LIBS_nvidia60=-lcudart CRAY_PRE_COMPILE_OPTS=-hnetwork=aries XDG_RUNTIME_DIR=/run/user/25581 PE_PETSC_DEFAULT_GENCOMPS_INTEL_x86_64=191 MODULE_VERSION_STACK=3.2.11.4 PE_TPSL_DEFAULT_GENCOMPS_INTEL_x86_skylake=190 ghprbSourceBranch=acc-bench-driver EBVERSIONXALT=2.8.10 PE_PETSC_DEFAULT_GENCOMPS_INTEL_x86_skylake=191 ALLINEA_QUEUE_DLL=/opt/cray/pe/mpt/7.7.15/gni/mpich-crayclang/9.0/lib/libtvmpich.so.3.0.1 PE_TPSL_DEFAULT_GENCOMPILERS_CRAYCLANG_x86_64=9.0 JRE_HOME=/usr/lib64/jvm/java/jre PE_LIBSCI_PKGCONFIG_VARIABLES=PE_LIBSCI_OMP_REQUIRES_@openmp@:PE_SCI_EXT_LIBPATH:PE_SCI_EXT_LIBNAME PE_MPICH_TARGET_VAR_nvidia20=-lcudart PE_MPICH_DEFAULT_VOLATILE_PRGENV=PGI GNU CRAYCLANG CRAY XDG_CONFIG_DIRS=/etc/xdg PATH=/apps/daint/UES/jenkins/7.0.UP02/gpu/easybuild/software/CMake/3.14.5/bin:/opt/nvidia/cudatoolkit10.2/10.2.89_3.28-7.0.2.1_2.17__g52c0314/bin:/opt/nvidia/cudatoolkit10.2/10.2.89_3.28-7.0.2.1_2.17__g52c0314/libnvvp:/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10/sbin:/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10/bin:/opt/cray/pe/mpt/7.7.15/gni/bin:/opt/cray/pe/perftools/20.08.0/bin:/opt/cray/pe/papi/6.0.0.2/bin:/opt/cray/rca/2.2.20-7.0.2.1_2.27__g8e3fb5b.ari/bin:/opt/cray/pe/craype/2.7.0/bin:/opt/cray/pe/cce/10.0.2/cce-clang/x86_64/bin:/opt/cray/pe/cce/10.0.2/binutils/x86_64/x86_64-pc-linux-gnu/bin:/opt/cray/pe/cce/10.0.2/binutils/cross/x86_64-aarch64/aarch64-linux-gnu/../bin:/opt/cray/pe/cce/10.0.2/utils/x86_64/bin:/apps/cscs/daint/bin:/apps/daint/system/bin:/apps/common/system/bin:/opt/cray/cge/3.2.1463_r03f4dfb_fe3.3.0_2019062614/bin:/opt/cray/pe/modules/3.2.11.4/bin:/users/jenkg90/bin:/usr/local/bin:/usr/bin:/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin:/opt/cray/pe/bin:/users/jenkg90/.local/bin:/users/jenkg90/bin JAVA_BINDIR=/usr/lib64/jvm/java/bin SLURM_JOB_ID=27166243 PE_TPSL_64_DEFAULT_VOLATILE_PRGENV=CRAYCLANG CRAYCLANG64 GNU GNU64 INTEL INTEL64 ghprbPullAuthorLoginMention=@hfp 
_LMFILES_=/opt/cray/pe/modulefiles/modules/3.2.11.4:/opt/cray/modulefiles/cge/3.2.1463_r03f4dfb_fe3.3.0_2019062614:/opt/modulefiles/Base-opts/2.4.142-7.0.2.1_2.21__g8f27585.ari:/opt/cray/pe/modulefiles/cce/10.0.2:/opt/cray/pe/craype/2.7.0/modulefiles/craype-network-aries:/opt/cray/pe/modulefiles/craype/2.7.0:/opt/cray/pe/modulefiles/cray-libsci/20.06.1:/opt/cray/pe/modulefiles/pmi/5.0.16:/opt/cray/ari/modulefiles/rca/2.2.20-7.0.2.1_2.27__g8e3fb5b.ari:/opt/cray/pe/modulefiles/atp/3.7.4:/opt/cray/pe/modulefiles/perftools-base/20.08.0:/opt/cray/pe/modulefiles/PrgEnv-cray/6.0.8:/opt/cray/pe/modulefiles/cray-mpich/7.7.15:/opt/modulefiles/slurm/20.02.2-1:/opt/cray/pe/craype/2.7.0/modulefiles/craype-haswell:/apps/daint/UES/easybuild/modulefiles/xalt/2.8.10:/apps/daint/UES/easybuild/modulefiles/daint-gpu:/opt/cray/modulefiles/cudatoolkit/10.2.89_3.28-7.0.2.1_2.17__g52c0314:/apps/daint/UES/jenkins/7.0.UP02/gpu/easybuild/modules/all/CMake/3.14.5 sha1=origin/pr/391/merge PE_NETCDF_DEFAULT_FIXED_PRGENV=GNU CRAYCLANG PGI INTEL PE_PETSC_DEFAULT_GENCOMPILERS_GNU_haswell=8.2 MODULESHOME=/opt/cray/pe/modules/3.2.11.4 PKG_CONFIG_PATH=/opt/nvidia/cudatoolkit10.2/10.2.89_3.28-7.0.2.1_2.17__g52c0314/lib64/pkgconfig:/opt/cray/rca/2.2.20-7.0.2.1_2.27__g8e3fb5b.ari/lib64/pkgconfig:/opt/cray/pe/pmi/5.0.16/lib64/pkgconfig:/opt/cray/pe/craype/2.7.0/pkg-config:/opt/cray/pe/iobuf/2.0.10/lib/pkgconfig:/opt/cray/pe/fftw/2.1.5.9/lib/pkgconfig:/opt/cray/cge/3.2.1463_r03f4dfb_fe3.3.0_2019062614/lib/pkgconfig:/opt/cray/pe/atp/3.7.4/lib/pkgconfig LIBSCI_BASE_DIR=/opt/cray/pe/libsci/20.06.1 INFOPATH=/opt/cray/cge/3.2.1463_r03f4dfb_fe3.3.0_2019062614/info G_BROKEN_FILENAMES=1 XALT_ETC_DIR=/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10/etc HISTSIZE=1000 CRAYPE_DIR=/opt/cray/pe/craype/2.7.0 LD_PRELOAD=/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10/lib64/libxalt_init.so PE_GA_DEFAULT_VOLATILE_PRGENV=GNU CMAKE_LIBRARY_PATH=/apps/daint/UES/xalt/xalt2/software/xalt/2.8.10/lib64 PE_MPICH_GENCOMPS_CRAYCLANG=90 PE_TRILINOS_DEFAULT_GENCOMPILERS_GNU_x86_64=8.2 ghprbTriggerAuthorLoginMention=@codecov[bot] PE_HDF5_PARALLEL_DEFAULT_REQUIRED_PRODUCTS=PE_MPICH PE_MPICH_DIR_CRAYCLANG_DEFAULT64=64 OFFLOAD_INIT=on_start PE_PETSC_DEFAULT_GENCOMPILERS_GNU_x86_64=8.2 PE_PKGCONFIG_DEFAULT_PRODUCTS=PE_TRILINOS:PE_TPSL_64:PE_TPSL:PE_PETSC:PE_PARALLEL_NETCDF:PE_NETCDF_HDF5PARALLEL:PE_NETCDF:PE_MPICH:PE_LIBSCI_ACC:PE_LIBSCI:PE_HDF5_PARALLEL:PE_HDF5:PE_GA:PE_FFTW2:PE_FFTW CPU=x86_64 CRAYPE_VERSION=2.7.0 INCLUDE_PATH_X86_64=/opt/cray/pe/cce/10.0.2/cce-clang/x86_64/lib/clang/10.0.0/include:/opt/cray/pe/cce/10.0.2/cce/x86_64/include/craylibs EBDEVELCMAKE=/apps/daint/UES/jenkins/7.0.UP02/gpu/easybuild/software/CMake/3.14.5/easybuild/CMake-3.14.5-easybuild-devel CRAY_PMI_POST_LINK_OPTS=-L/opt/cray/pe/pmi/5.0.16/lib64 SLURM_LOCALID=0 CRAY_MPICH_PREFIX=/opt/cray/pe/mpt/7.7.15/gni/mpich-crayclang/9.0 JENKINS_SERVER_COOKIE=durable-235859422a60516b78fc407d3a10bc29 CVS_RSH=ssh GPU_DEVICE_ORDINAL=0 LESSOPEN=lessopen.sh %s PE_TPSL_64_DEFAULT_GENCOMPS_INTEL_x86_64=190 CRAYPAT_OPTS_EXECUTABLE=libexec64/opts PE_TPSL_DEFAULT_GENCOMPILERS_CRAYCLANG_haswell=9.0 PE_FFTW_DEFAULT_TARGET_x86_cascadelake=x86_cascadelake PE_LIBSCI_GENCOMPS_GNU_x86_64=81 PE_LIBSCI_DEFAULT_GENCOMPILERS_GNU_x86_64=8.1 PE_TPSL_64_DEFAULT_GENCOMPILERS_INTEL_x86_skylake=19.0 BASH_FUNC_module%%=() { eval `/opt/cray/pe/modules/3.2.11.4/bin/modulecmd bash $*` } _=/usr/bin/env + ulimit -s 256000 + env CTEST_OUTPUT_ON_FAILURE=1 make test 'ARGS=--timeout 900' + tee -a test.out Running tests... 
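The test step above boils down to a handful of commands. The following is a minimal interactive sketch of it, assuming the modules listed at the top of this log are loaded and the DBCSR build in the scratch directory below has already been configured and compiled; the commands and values are taken from the shell trace above, only the comments are added here:

  # hedged reconstruction of the CI test step shown in the trace above
  cd /scratch/snx3000/jenkg90/jenkins-g90-DBCSR-580.cray
  export CRAY_CUDA_MPS=1        # let the 4 MPI ranks per node (SLURM_NTASKS_PER_NODE=4) share the GPU via MPS
  export OMP_PROC_BIND=TRUE     # pin OpenMP threads (SLURM_CPUS_PER_TASK=3 in the env dump)
  ulimit -s 256000
  env CTEST_OUTPUT_ON_FAILURE=1 make test 'ARGS=--timeout 900' | tee -a test.out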
Test project /scratch/snx3000/jenkg90/jenkins-g90-DBCSR-580.cray
      Start  1: dbcsr_perf:inputs/test_H2O.perf
 1/20 Test #1: dbcsr_perf:inputs/test_H2O.perf ....................... Passed 4.55 sec
      Start  2: dbcsr_perf:inputs/test_rect1_dense.perf
 2/20 Test #2: dbcsr_perf:inputs/test_rect1_dense.perf ............... Passed 2.60 sec
      Start  3: dbcsr_perf:inputs/test_rect1_sparse.perf
 3/20 Test #3: dbcsr_perf:inputs/test_rect1_sparse.perf .............. Passed 3.37 sec
      Start  4: dbcsr_perf:inputs/test_rect2_dense.perf
 4/20 Test #4: dbcsr_perf:inputs/test_rect2_dense.perf ............... Passed 2.57 sec
      Start  5: dbcsr_perf:inputs/test_rect2_sparse.perf
 5/20 Test #5: dbcsr_perf:inputs/test_rect2_sparse.perf .............. Passed 2.91 sec
      Start  6: dbcsr_perf:inputs/test_singleblock.perf
 6/20 Test #6: dbcsr_perf:inputs/test_singleblock.perf ............... Passed 2.42 sec
      Start  7: dbcsr_perf:inputs/test_square_dense.perf
 7/20 Test #7: dbcsr_perf:inputs/test_square_dense.perf .............. Passed 2.64 sec
      Start  8: dbcsr_perf:inputs/test_square_sparse.perf
 8/20 Test #8: dbcsr_perf:inputs/test_square_sparse.perf ............. Passed 2.70 sec
      Start  9: dbcsr_perf:inputs/test_square_sparse_bigblocks.perf
 9/20 Test #9: dbcsr_perf:inputs/test_square_sparse_bigblocks.perf ... Passed 2.68 sec
      Start 10: dbcsr_perf:inputs/test_square_sparse_rma.perf
10/20 Test #10: dbcsr_perf:inputs/test_square_sparse_rma.perf ......... Passed 2.83 sec
      Start 11: dbcsr_unittest1
11/20 Test #11: dbcsr_unittest1 ....................................... Passed 52.65 sec
      Start 12: dbcsr_unittest2
12/20 Test #12: dbcsr_unittest2 ....................................... Passed 17.13 sec
      Start 13: dbcsr_unittest3
13/20 Test #13: dbcsr_unittest3 ....................................... Passed 44.41 sec
      Start 14: dbcsr_tensor_unittest
14/20 Test #14: dbcsr_tensor_unittest .................................***Failed 9.91 sec
--------------------------------------------------------------------------------
Testing matrix representations of tensor rank 2
--------------------------------------------------------------------------------
Block sizes:
 Dim 1: 3 5 1 23 2 3 1 6 3 8 2 3 5 1
 Dim 2: 4 2 5 3 1 5 13 5 2 4 5 6 7 2 3 1 2 6 9 12 21
Non-zero blocks:
 Block 1: ( 1 1 )     Block 2: ( 1 3 )     Block 3: ( 1 11 )    Block 4: ( 2 15 )
 Block 5: ( 4 4 )     Block 6: ( 4 17 )    Block 7: ( 7 21 )    Block 8: ( 10 6 )
 Block 9: ( 10 9 )    Block 10: ( 10 13 )  Block 11: ( 10 19 )  Block 12: ( 13 7 )
Reference map: ( 1 | 2 )
Reference distribution (reported identically for every test below; the test distribution matched it in each case):
 Dist vec 1: 0 0 0 1 0 1 1 0 0 0 1 1 0 1
 Dist vec 2: 1 1 1 1 0 0 1 1 1 0 0 1 1 0 0 0 0 0 0 1 0
All of the following tests passed:
 Test 1: ( 1 | 2 )    Test 2: ( 2 | 1 )
--------------------------------------------------------------------------------
Testing matrix representations of tensor rank 3
--------------------------------------------------------------------------------
Block sizes:
 Dim 1: 3 1 5 2
 Dim 2: 1 2 5 3 2 4
 Dim 3: 4 2 10
Non-zero blocks:
 Block 1: ( 1 2 1 )   Block 2: ( 1 2 3 )   Block 3: ( 1 4 3 )
 Block 4: ( 2 1 2 )   Block 5: ( 2 1 3 )   Block 6: ( 2 2 2 )
Reference map: ( 1 | 2 3 )
Reference distribution (reported identically for every test below; the test distribution matched it in each case):
 Dist vec 1: 1 1 0 1
 Dist vec 2: 1 0 0 1 0 1
 Dist vec 3: 0 0 0
All of the following tests passed:
 Test  1: ( 1 | 2 3 )   Test  2: ( 1 2 | 3 )   Test  3: ( 1 | 3 2 )
 Test  4: ( 1 3 | 2 )   Test  5: ( 2 | 1 3 )   Test  6: ( 2 1 | 3 )
 Test  7: ( 2 | 3 1 )   Test  8: ( 2 3 | 1 )   Test  9: ( 3 | 2 1 )
 Test 10: ( 3 2 | 1 )   Test 11: ( 3 | 1 2 )   Test 12: ( 3 1 | 2 )
--------------------------------------------------------------------------------
Testing matrix representations of tensor rank 4
--------------------------------------------------------------------------------
Block sizes:
 Dim 1: 5 9
 Dim 2: 6 2 5 12 3 1 7 2 5 17 9 3 4
 Dim 3: 2 7 3 8 5 15 1
 Dim 4: 12 5 3
Non-zero blocks:
 Block 1: ( 1 2 1 3 )    Block 2: ( 1 2 4 2 )    Block 3: ( 1 3 6 3 )    Block 4: ( 1 4 3 1 )
 Block 5: ( 1 7 1 1 )    Block 6: ( 1 7 4 2 )    Block 7: ( 1 10 2 1 )   Block 8: ( 1 11 5 3 )
 Block 9: ( 1 11 7 2 )   Block 10: ( 1 12 3 2 )  Block 11: ( 1 12 3 3 )  Block 12: ( 2 1 1 1 )
 Block 13: ( 2 1 4 3 )   Block 14: ( 2 3 7 2 )   Block 15: ( 2 5 6 1 )   Block 16: ( 2 6 4 1 )
 Block 17: ( 2 6 5 3 )   Block 18: ( 2 9 2 2 )   Block 19: ( 2 12 3 2 )
Reference map: ( 1 2 | 3 4 )
Reference distribution (reported identically for every test below; the test distribution matched it in each case):
 Dist vec 1: 1 0
 Dist vec 2: 0 1 0 0 1 0 1 0 1 1 0 0 1
 Dist vec 3: 0 0 0 0 0 0 0
 Dist vec 4: 0 0 0
All of the following tests passed:
 Test  1: ( 1 | 2 3 4 )    Test  2: ( 1 2 | 3 4 )    Test  3: ( 1 2 3 | 4 )
 Test  4: ( 1 | 2 4 3 )    Test  5: ( 1 2 | 4 3 )    Test  6: ( 1 2 4 | 3 )
 Test  7: ( 1 | 3 2 4 )    Test  8: ( 1 3 | 2 4 )    Test  9: ( 1 3 2 | 4 )
 Test 10: ( 1 | 3 4 2 )    Test 11: ( 1 3 | 4 2 )    Test 12: ( 1 3 4 | 2 )
 Test 13: ( 1 | 4 3 2 )    Test 14: ( 1 4 | 3 2 )    Test 15: ( 1 4 3 | 2 )
 Test 16: ( 1 | 4 2 3 )    Test 17: ( 1 4 | 2 3 )    Test 18: ( 1 4 2 | 3 )
 Test 19: ( 2 | 1 3 4 )    Test 20: ( 2 1 | 3 4 )    Test 21: ( 2 1 3 | 4 )
 Test 22: ( 2 | 1 4 3 )    Test 23: ( 2 1 | 4 3 )    Test 24: ( 2 1 4 | 3 )
 Test 25: ( 2 | 3 1 4 )    Test 26: ( 2 3 | 1 4 )    Test 27: ( 2 3 1 | 4 )
 Test 28: ( 2 | 3 4 1 )    Test 29: ( 2 3 | 4 1 )    Test 30: ( 2 3 4 | 1 )
 Test 31: ( 2 | 4 3 1 )    Test 32: ( 2 4 | 3 1 )    Test 33: ( 2 4 3 | 1 )
 Test 34: ( 2 | 4 1 3 )    Test 35: ( 2 4 | 1 3 )    Test 36: ( 2 4 1 | 3 )
 Test 37: ( 3 | 2 1 4 )    Test 38: ( 3 2 | 1 4 )    Test 39: ( 3 2 1 | 4 )
 Test 40: ( 3 | 2 4 1 )    Test 41: ( 3 2 | 4 1 )    Test 42: ( 3 2 4 | 1 )
 Test 43: ( 3 | 1 2 4 )    Test 44: ( 3 1 | 2 4 )    Test 45: ( 3 1 2 | 4 )
 Test 46: ( 3 | 1 4 2 )    Test 47: ( 3 1 | 4 2 )    Test 48: ( 3 1 4 | 2 )
 Test 49: ( 3 | 4 1 2 )    Test 50: ( 3 4 | 1 2 )    Test 51: ( 3 4 1 | 2 )
 Test 52: ( 3 | 4 2 1 )    Test 53: ( 3 4 | 2 1 )    Test 54: ( 3 4 2 | 1 )
 Test 55: ( 4 | 2 3 1 )    Test 56: ( 4 2 | 3 1 )    Test 57: ( 4 2 3 | 1 )
 Test 58: ( 4 | 2 1 3 )    Test 59: ( 4 2 | 1 3 )    Test 60: ( 4 2 1 | 3 )
 Test 61: ( 4 | 3 2 1 )    Test 62: ( 4 3 | 2 1 )    Test 63: ( 4 3 2 | 1 )
 Test 64: ( 4 | 3 1 2 )    Test 65: ( 4 3 | 1 2 )    Test 66: ( 4 3 1 | 2 )
 Test 67: ( 4 | 1 3 2 )    Test 68: ( 4 1 | 3 2 )    Test 69: ( 4 1 3 | 2 )
Test 70: ( 4 | 1 2 3 ) Reference distribution: Dist vec 1: 1 0 Dist vec 2: 0 1 0 0 1 0 1 0 1 1 0 0 1 Dist vec 3: 0 0 0 0 0 0 0 Dist vec 4: 0 0 0 Test distribution: Dist vec 1: 1 0 Dist vec 2: 0 1 0 0 1 0 1 0 1 1 0 0 1 Dist vec 3: 0 0 0 0 0 0 0 Dist vec 4: 0 0 0 Test 70 Test passed!
Test 71: ( 4 1 | 2 3 ) Reference distribution: Dist vec 1: 1 0 Dist vec 2: 0 1 0 0 1 0 1 0 1 1 0 0 1 Dist vec 3: 0 0 0 0 0 0 0 Dist vec 4: 0 0 0 Test distribution: Dist vec 1: 1 0 Dist vec 2: 0 1 0 0 1 0 1 0 1 1 0 0 1 Dist vec 3: 0 0 0 0 0 0 0 Dist vec 4: 0 0 0 Test 71 Test passed!
Test 72: ( 4 1 2 | 3 ) Reference distribution: Dist vec 1: 1 0 Dist vec 2: 0 1 0 0 1 0 1 0 1 1 0 0 1 Dist vec 3: 0 0 0 0 0 0 0 Dist vec 4: 0 0 0 Test distribution: Dist vec 1: 1 0 Dist vec 2: 0 1 0 0 1 0 1 0 1 1 0 0 1 Dist vec 3: 0 0 0 0 0 0 0 Dist vec 4: 0 0 0 Test 72 Test passed!
--------------------------------------------------------------------------------
Testing tensor contraction (12|3) x (3|4) = (12|4)
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
DBCSR TENSOR CONTRACTION: (12|3) x (3|4) = (12|4)
--------------------------------------------------------------------------------
GLOBAL INFO OF (12|3)
 block dimensions: 4 11 9
 full dimensions: 25 83 74
 process grid dimensions: 2 2 1
DISTRIBUTION OF (12|3)
 Number of non-zero blocks: 32
 Percentage of non-zero blocks: 8.08
 Average number of blocks per CPU: 8
 Maximum number of blocks per CPU: 13
 Average number of matrix elements per CPU: 4078
 Maximum number of matrix elements per CPU: 5967
GLOBAL INFO OF (3|4)
 block dimensions: 9 5
 full dimensions: 74 32
 process grid dimensions: 2 2
DISTRIBUTION OF (3|4)
 Number of non-zero blocks: 12
 Percentage of non-zero blocks: 26.67
 Average number of blocks per CPU: 3
 Maximum number of blocks per CPU: 4
 Average number of matrix elements per CPU: 194
 Maximum number of matrix elements per CPU: 347
INDEX INFO
 tensor index: (bac) x (cd) = (bad)
 matrix index: (ba|c) x (c|d) = (ba|d)
aligning tensor index with data
INDEX INFO
 tensor index: (bac) x (cd) = (bad)
 matrix index: (ba|c) x (c|d) = (ba|d)
large tensors: 1, 3; small tensor: 2
sorting contraction indices
compatibility of (12|3): Normal
compatibility of (12|4): Normal
No redistribution of (12|3)
No redistribution of (12|4)
compatibility of (3|4): Normal
No redistribution of (3|4)
INDEX INFO
 tensor index: (bac) x (cd) = (bad)
 matrix index: (ba|c) x (c|d) = (ba|d)
--------------------------------------------------------------------------------
DBCSR TAS MATRIX MULTIPLICATION: (12|3) matrix x (3|4) matrix = (12|4) matrix
--------------------------------------------------------------------------------
mm dims: 44 9 5
MM PARAMETERS
 Est. number of matrix elements per CPU of result matrix: 4012
 Est. optimal split factor: 4
No redistribution of (12|3) matrix and (12|4) matrix
Change split factor of (12|3) matrix : No
Change split factor of (12|4) matrix : No
mm case: | x + = |
SPLIT / PARALLELIZATION INFO
 splitting rows by factor 4
 global grid sizes: 4x 1
 grid sizes on subgroups: 1x 1
GLOBAL INFO OF (12|3) matrix
 block dimensions: 44 9
 full dimensions: 2075 74
 process grid dimensions: 4 1
GLOBAL INFO OF (3|4) matrix
 block dimensions: 9 5
 full dimensions: 74 32
 process grid dimensions: 4 1
GLOBAL INFO OF (12|4) matrix
 block dimensions: 44 5
 full dimensions: 2075 32
 process grid dimensions: 4 1
Change process grid: No
DISTRIBUTION OF (12|3) matrix
 Number of non-zero blocks: 32
 Percentage of non-zero blocks: 8.08
 Average number of blocks per group: 8
 Maximum number of blocks per group: 13
 Average number of matrix elements per group: 4078
 Maximum number of matrix elements per group: 5967
 Average number of blocks per CPU: 8
 Maximum number of blocks per CPU: 13
 Average number of matrix elements per CPU: 4078
 Maximum number of matrix elements per CPU: 5967
DISTRIBUTION OF (3|4) matrix replicated
 Number of non-zero blocks: 48
 Percentage of non-zero blocks: 26.67
 Average number of blocks per group: 12
 Maximum number of blocks per group: 12
 Average number of matrix elements per group: 776
 Maximum number of matrix elements per group: 776
 Average number of blocks per CPU: 12
 Maximum number of blocks per CPU: 12
 Average number of matrix elements per CPU: 776
 Maximum number of matrix elements per CPU: 776
DISTRIBUTION OF (12|4) matrix
 Number of non-zero blocks: 42
 Percentage of non-zero blocks: 19.09
 Average number of blocks per group: 11
 Maximum number of blocks per group: 17
 Average number of matrix elements per group: 4194
 Maximum number of matrix elements per group: 8268
 Average number of blocks per CPU: 11
 Maximum number of blocks per CPU: 17
 Average number of matrix elements per CPU: 4194
 Maximum number of matrix elements per CPU: 8268
MM PARAMETERS
 Number of matrix elements per CPU of result matrix: 4012
 Optimal split factor: 4
--------------------------------------------------------------------------------
TAS MATRIX MULTIPLICATION DONE
--------------------------------------------------------------------------------
GLOBAL INFO OF (12|4)
 block dimensions: 4 11 5
 full dimensions: 25 83 32
 process grid dimensions: 2 2 1
DISTRIBUTION OF (12|4)
 Number of non-zero blocks: 42
 Percentage of non-zero blocks: 19.09
 Average number of blocks per CPU: 11
 Maximum number of blocks per CPU: 17
 Average number of matrix elements per CPU: 4194
 Maximum number of matrix elements per CPU: 8268
--------------------------------------------------------------------------------
TENSOR CONTRACTION DONE
--------------------------------------------------------------------------------
Test passed!
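For readers unfamiliar with the mapping reported in the INDEX INFO lines above ("matrix index: (ba|c) x (c|d) = (ba|d)"): the contraction is carried out by fusing tensor indices 1 and 2 into a single matrix row index, after which the TAS step is an ordinary matrix-matrix multiplication over index 3. A minimal dense NumPy sketch of that matricization follows; the array names and random data are illustrative assumptions, not part of the unit test, and the sizes are the "full dimensions" reported in the log. DBCSR does the same algebra block-sparsely and in parallel, which is what the distribution statistics above describe.

    # Illustrative only: dense sketch of the matricization behind
    # (12|3) x (3|4) = (12|4).  Sizes 25, 83, 74, 32 are the reported
    # full dimensions; names and random data are assumptions.
    import numpy as np

    n1, n2, n3, n4 = 25, 83, 74, 32
    rng = np.random.default_rng(0)
    t12_3 = rng.random((n1, n2, n3))   # tensor with split (12|3)
    m3_4 = rng.random((n3, n4))        # matrix with split (3|4)

    # Direct tensor contraction over index 3.
    ref = np.einsum("ijk,kl->ijl", t12_3, m3_4)

    # Matricized form: rows = fused (1,2) index, columns = index 3,
    # so the contraction becomes a plain matrix-matrix multiplication.
    res = (t12_3.reshape(n1 * n2, n3) @ m3_4).reshape(n1, n2, n4)

    assert np.allclose(ref, res)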
3.55271367880050093E-15 -------------------------------------------------------------------------------- Testing tensor contraction (2|31) x (4|3) = (24|1) -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- DBCSR TENSOR CONTRACTION: (2|31) x (4|3) = (24|1) -------------------------------------------------------------------------------- GLOBAL INFO OF (2|31) block dimensions: 4 11 9 full dimensions: 25 83 74 process grid dimensions: 2 2 1 DISTRIBUTION OF (2|31) Number of non-zero blocks: 32 Percentage of non-zero blocks: 8.08 Average number of blocks per CPU: 8 Maximum number of blocks per CPU: 13 Average number of matrix elements per CPU: 4078 Maximum number of matrix elements per CPU: 5967 GLOBAL INFO OF (4|3) block dimensions: 9 5 full dimensions: 74 32 process grid dimensions: 2 2 DISTRIBUTION OF (4|3) Number of non-zero blocks: 12 Percentage of non-zero blocks: 26.67 Average number of blocks per CPU: 3 Maximum number of blocks per CPU: 4 Average number of matrix elements per CPU: 194 Maximum number of matrix elements per CPU: 347 INDEX INFO tensor index: (abc) x (cd) = (abd) matrix index: (b|ca) x (d|c) = (bd|a) aligning tensor index with data INDEX INFO tensor index: (bca) x (dc) = (bda) matrix index: (b|ca) x (d|c) = (bd|a) large tensors: 1, 3; small tensor: 2 sorting contraction indices compatibility of (2|31): Not compatible compatibility of (24|1): Not compatible Redistribution of (2|31) Redistribution of (24|1) compatible with (2|31) compatibility of (2|31): Normal compatibility of (24|1): Normal compatibility of (4|3): Transposed No redistribution of (4|3) INDEX INFO tensor index: (bca) x (dc) = (bda) matrix index: (ba|c) x (d|c) = (ba|d) GLOBAL INFO OF (2|31) block dimensions: 11 9 4 full dimensions: 83 74 25 process grid dimensions: 2 1 2 DISTRIBUTION OF (2|31) Number of non-zero blocks: 32 Percentage of non-zero blocks: 8.08 Average number of blocks per CPU: 8 Maximum number of blocks per CPU: 13 Average number of matrix elements per CPU: 4078 Maximum number of matrix elements per CPU: 5967 -------------------------------------------------------------------------------- DBCSR TAS MATRIX MULTIPLICATION: (2|31) matrix x (4|3) matrix = (24|1) matrix -------------------------------------------------------------------------------- mm dims: 44 9 5 MM PARAMETERS Est. number of matrix elements per CPU of result matrix: 4012 Est. 
optimal split factor: 4 No redistribution of (2|31) matrix and (24|1) matrix Change split factor of (2|31) matrix : No Change split factor of (24|1) matrix : No mm case: | x + = | SPLIT / PARALLELIZATION INFO splitting rows by factor 4 global grid sizes: 4x 1 grid sizes on subgroups: 1x 1 GLOBAL INFO OF (2|31) matrix block dimensions: 44 9 full dimensions: 2075 74 process grid dimensions: 4 1 GLOBAL INFO OF (4|3) matrix block dimensions: 9 5 full dimensions: 74 32 process grid dimensions: 4 1 GLOBAL INFO OF (24|1) matrix block dimensions: 44 5 full dimensions: 2075 32 process grid dimensions: 4 1 Change process grid: No DISTRIBUTION OF (2|31) matrix Number of non-zero blocks: 32 Percentage of non-zero blocks: 8.08 Average number of blocks per group: 8 Maximum number of blocks per group: 13 Average number of matrix elements per group: 4078 Maximum number of matrix elements per group: 5967 Average number of blocks per CPU: 8 Maximum number of blocks per CPU: 13 Average number of matrix elements per CPU: 4078 Maximum number of matrix elements per CPU: 5967 DISTRIBUTION OF (4|3) matrix replicated Number of non-zero blocks: 48 Percentage of non-zero blocks: 26.67 Average number of blocks per group: 12 Maximum number of blocks per group: 12 Average number of matrix elements per group: 776 Maximum number of matrix elements per group: 776 Average number of blocks per CPU: 12 Maximum number of blocks per CPU: 12 Average number of matrix elements per CPU: 776 Maximum number of matrix elements per CPU: 776 DISTRIBUTION OF (24|1) matrix Number of non-zero blocks: 38 Percentage of non-zero blocks: 17.27 Average number of blocks per group: 10 Maximum number of blocks per group: 16 Average number of matrix elements per group: 4012 Maximum number of matrix elements per group: 8220 Average number of blocks per CPU: 10 Maximum number of blocks per CPU: 16 Average number of matrix elements per CPU: 4012 Maximum number of matrix elements per CPU: 8220 MM PARAMETERS Number of matrix elements per CPU of result matrix: 4012 Optimal split factor: 4 -------------------------------------------------------------------------------- TAS MATRIX MULTIPLICATION DONE -------------------------------------------------------------------------------- GLOBAL INFO OF (24|1) block dimensions: 11 5 4 full dimensions: 83 32 25 process grid dimensions: 2 1 2 DISTRIBUTION OF (24|1) Number of non-zero blocks: 38 Percentage of non-zero blocks: 17.27 Average number of blocks per CPU: 10 Maximum number of blocks per CPU: 16 Average number of matrix elements per CPU: 4012 Maximum number of matrix elements per CPU: 8220 GLOBAL INFO OF (24|1) block dimensions: 4 11 5 full dimensions: 25 83 32 process grid dimensions: 2 2 1 DISTRIBUTION OF (24|1) Number of non-zero blocks: 42 Percentage of non-zero blocks: 19.09 Average number of blocks per CPU: 11 Maximum number of blocks per CPU: 17 Average number of matrix elements per CPU: 4194 Maximum number of matrix elements per CPU: 8268 -------------------------------------------------------------------------------- TENSOR CONTRACTION DONE -------------------------------------------------------------------------------- Test passed! 
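The section above also shows what "compatibility of (4|3): Transposed" means in practice: the contracted index 3 is the column index of the small matrix, so the matricized product uses its transpose, while (2|31) and (24|1) are redistributed so that their fused row index lines up. A hedged dense sketch of the same index algebra (sizes from the reported full dimensions; the storage order used here is an assumption made only for illustration):

    # Illustrative only: why the small matrix enters the matricized
    # product transposed in (2|31) x (4|3) = (24|1).
    import numpy as np

    n1, n2, n3, n4 = 25, 83, 74, 32
    rng = np.random.default_rng(1)
    t2_31 = rng.random((n1, n2, n3))   # 3-index tensor, index order (1,2,3) assumed
    m4_3 = rng.random((n4, n3))        # matrix (4|3): row = index 4, column = index 3

    ref = np.einsum("abc,dc->abd", t2_31, m4_3)           # contract over index 3
    res = (t2_31.reshape(n1 * n2, n3) @ m4_3.T).reshape(n1, n2, n4)

    assert np.allclose(ref, res)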
4.44089209850062616E-15 -------------------------------------------------------------------------------- Testing tensor contraction (4|3) x (1|32) = (24|1) -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- DBCSR TENSOR CONTRACTION: (4|3) x (1|32) = (24|1) -------------------------------------------------------------------------------- GLOBAL INFO OF (4|3) block dimensions: 9 5 full dimensions: 74 32 process grid dimensions: 2 2 DISTRIBUTION OF (4|3) Number of non-zero blocks: 12 Percentage of non-zero blocks: 26.67 Average number of blocks per CPU: 3 Maximum number of blocks per CPU: 4 Average number of matrix elements per CPU: 194 Maximum number of matrix elements per CPU: 347 GLOBAL INFO OF (1|32) block dimensions: 4 11 9 full dimensions: 25 83 74 process grid dimensions: 2 2 1 DISTRIBUTION OF (1|32) Number of non-zero blocks: 30 Percentage of non-zero blocks: 7.58 Average number of blocks per CPU: 8 Maximum number of blocks per CPU: 12 Average number of matrix elements per CPU: 4063 Maximum number of matrix elements per CPU: 5967 INDEX INFO tensor index: (ba) x (cdb) = (cda) matrix index: (a|b) x (c|bd) = (da|c) aligning tensor index with data INDEX INFO tensor index: (ab) x (cbd) = (dac) matrix index: (a|b) x (c|bd) = (da|c) large tensors: 2, 3; small tensor: 1 sorting contraction indices compatibility of (1|32): Not compatible compatibility of (24|1): Not compatible Redistribution of (1|32) Redistribution of (24|1) compatible with (1|32) compatibility of (1|32): Normal compatibility of (24|1): Normal compatibility of (4|3): Normal No redistribution of (4|3) INDEX INFO tensor index: (ab) x (cbd) = (dac) matrix index: (a|b) x (cd|b) = (cd|a) GLOBAL INFO OF (1|32) block dimensions: 4 9 11 full dimensions: 25 74 83 process grid dimensions: 2 1 2 DISTRIBUTION OF (1|32) Number of non-zero blocks: 30 Percentage of non-zero blocks: 7.58 Average number of blocks per CPU: 8 Maximum number of blocks per CPU: 12 Average number of matrix elements per CPU: 4063 Maximum number of matrix elements per CPU: 5967 -------------------------------------------------------------------------------- DBCSR TAS MATRIX MULTIPLICATION: (4|3) matrix x (1|32) matrix = (24|1) matrix -------------------------------------------------------------------------------- mm dims: 5 9 44 MM PARAMETERS Est. number of matrix elements per CPU of result matrix: 4012 Est. 
optimal split factor: 4 No redistribution of (1|32) matrix and (24|1) matrix Change split factor of (1|32) matrix : No Change split factor of (24|1) matrix : No mm case: + x |T = |T SPLIT / PARALLELIZATION INFO splitting rows by factor 4 global grid sizes: 4x 1 grid sizes on subgroups: 1x 1 GLOBAL INFO OF (4|3) matrix block dimensions: 5 9 full dimensions: 32 74 process grid dimensions: 4 1 GLOBAL INFO OF (1|32) matrix block dimensions: 44 9 full dimensions: 2075 74 process grid dimensions: 4 1 GLOBAL INFO OF (24|1) matrix block dimensions: 44 5 full dimensions: 2075 32 process grid dimensions: 4 1 Change process grid: No DISTRIBUTION OF (4|3) matrix replicated Number of non-zero blocks: 48 Percentage of non-zero blocks: 26.67 Average number of blocks per group: 12 Maximum number of blocks per group: 12 Average number of matrix elements per group: 776 Maximum number of matrix elements per group: 776 Average number of blocks per CPU: 12 Maximum number of blocks per CPU: 12 Average number of matrix elements per CPU: 776 Maximum number of matrix elements per CPU: 776 DISTRIBUTION OF (1|32) matrix Number of non-zero blocks: 30 Percentage of non-zero blocks: 7.58 Average number of blocks per group: 8 Maximum number of blocks per group: 12 Average number of matrix elements per group: 4063 Maximum number of matrix elements per group: 5967 Average number of blocks per CPU: 8 Maximum number of blocks per CPU: 12 Average number of matrix elements per CPU: 4063 Maximum number of matrix elements per CPU: 5967 DISTRIBUTION OF (24|1) matrix Number of non-zero blocks: 38 Percentage of non-zero blocks: 17.27 Average number of blocks per group: 10 Maximum number of blocks per group: 16 Average number of matrix elements per group: 4012 Maximum number of matrix elements per group: 8220 Average number of blocks per CPU: 10 Maximum number of blocks per CPU: 16 Average number of matrix elements per CPU: 4012 Maximum number of matrix elements per CPU: 8220 MM PARAMETERS Number of matrix elements per CPU of result matrix: 4012 Optimal split factor: 4 -------------------------------------------------------------------------------- TAS MATRIX MULTIPLICATION DONE -------------------------------------------------------------------------------- GLOBAL INFO OF (24|1) block dimensions: 11 5 4 full dimensions: 83 32 25 process grid dimensions: 2 1 2 DISTRIBUTION OF (24|1) Number of non-zero blocks: 38 Percentage of non-zero blocks: 17.27 Average number of blocks per CPU: 10 Maximum number of blocks per CPU: 16 Average number of matrix elements per CPU: 4012 Maximum number of matrix elements per CPU: 8220 GLOBAL INFO OF (24|1) block dimensions: 4 11 5 full dimensions: 25 83 32 process grid dimensions: 2 2 1 DISTRIBUTION OF (24|1) Number of non-zero blocks: 42 Percentage of non-zero blocks: 19.09 Average number of blocks per CPU: 11 Maximum number of blocks per CPU: 17 Average number of matrix elements per CPU: 4194 Maximum number of matrix elements per CPU: 8268 -------------------------------------------------------------------------------- TENSOR CONTRACTION DONE -------------------------------------------------------------------------------- Test passed! 
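The "SPLIT / PARALLELIZATION INFO" block explains why the (4|3) matrix appears here with 48 non-zero blocks although its original distribution had only 12: with "splitting rows by factor 4", each row subgroup holds a full replica of the small operand (12 x 4 = 48) and multiplies only its own slice of the large matricized operand. A rough NumPy sketch of that split-rows/replicated scheme (this is not DBCSR's actual communication pattern, and the operand orientation is simplified relative to the "+ x |T = |T" case above):

    # Illustrative only: row-split large operand, replicated small operand.
    import numpy as np

    nrows, k, ncols, nsplit = 2075, 74, 32, 4      # full dims and split factor from the log
    rng = np.random.default_rng(2)
    large = rng.random((nrows, k))                 # matricized large operand
    small = rng.random((k, ncols))                 # small operand, replicated on every subgroup

    row_slices = np.array_split(np.arange(nrows), nsplit)
    local_results = [large[rows, :] @ small for rows in row_slices]   # independent per subgroup
    result = np.vstack(local_results)

    assert np.allclose(result, large @ small)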
1.33226762955018785E-15 -------------------------------------------------------------------------------- Testing tensor contraction (1|24) x (3|4) = (21|3) -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- DBCSR TENSOR CONTRACTION: (1|24) x (3|4) = (21|3) -------------------------------------------------------------------------------- GLOBAL INFO OF (1|24) block dimensions: 4 11 5 full dimensions: 25 83 32 process grid dimensions: 2 2 1 DISTRIBUTION OF (1|24) Number of non-zero blocks: 3 Percentage of non-zero blocks: 1.36 Average number of blocks per CPU: 1 Maximum number of blocks per CPU: 2 Average number of matrix elements per CPU: 170 Maximum number of matrix elements per CPU: 446 GLOBAL INFO OF (3|4) block dimensions: 9 5 full dimensions: 74 32 process grid dimensions: 2 2 DISTRIBUTION OF (3|4) Number of non-zero blocks: 12 Percentage of non-zero blocks: 26.67 Average number of blocks per CPU: 3 Maximum number of blocks per CPU: 4 Average number of matrix elements per CPU: 194 Maximum number of matrix elements per CPU: 347 INDEX INFO tensor index: (abc) x (dc) = (abd) matrix index: (a|bc) x (d|c) = (ba|d) aligning tensor index with data INDEX INFO tensor index: (abc) x (dc) = (bad) matrix index: (a|bc) x (d|c) = (ba|d) large tensors: 1, 3; small tensor: 2 sorting contraction indices compatibility of (1|24): Not compatible compatibility of (21|3): Normal No redistribution of (21|3) Redistribution of (1|24) compatible with (21|3) compatibility of (1|24): Normal compatibility of (3|4): Transposed No redistribution of (3|4) INDEX INFO tensor index: (abc) x (dc) = (bad) matrix index: (ba|c) x (d|c) = (ba|d) GLOBAL INFO OF (1|24) block dimensions: 4 11 5 full dimensions: 25 83 32 process grid dimensions: 2 2 1 DISTRIBUTION OF (1|24) Number of non-zero blocks: 3 Percentage of non-zero blocks: 1.36 Average number of blocks per CPU: 1 Maximum number of blocks per CPU: 2 Average number of matrix elements per CPU: 170 Maximum number of matrix elements per CPU: 446 -------------------------------------------------------------------------------- DBCSR TAS MATRIX MULTIPLICATION: (1|24) matrix x (3|4) matrix = (21|3) matrix -------------------------------------------------------------------------------- mm dims: 44 5 9 MM PARAMETERS Est. number of matrix elements per CPU of result matrix: 338 Est. 
optimal split factor: 2 No redistribution of (1|24) matrix and (21|3) matrix Change split factor of (1|24) matrix : No Change split factor of (21|3) matrix : No mm case: | x + = | SPLIT / PARALLELIZATION INFO splitting rows by factor 4 global grid sizes: 4x 1 grid sizes on subgroups: 1x 1 GLOBAL INFO OF (1|24) matrix block dimensions: 44 5 full dimensions: 2075 32 process grid dimensions: 4 1 GLOBAL INFO OF (3|4) matrix block dimensions: 5 9 full dimensions: 32 74 process grid dimensions: 4 1 GLOBAL INFO OF (21|3) matrix block dimensions: 44 9 full dimensions: 2075 74 process grid dimensions: 4 1 Change process grid: No DISTRIBUTION OF (1|24) matrix Number of non-zero blocks: 3 Percentage of non-zero blocks: 1.36 Average number of blocks per group: 1 Maximum number of blocks per group: 2 Average number of matrix elements per group: 170 Maximum number of matrix elements per group: 446 Average number of blocks per CPU: 1 Maximum number of blocks per CPU: 2 Average number of matrix elements per CPU: 170 Maximum number of matrix elements per CPU: 446 DISTRIBUTION OF (3|4) matrix replicated Number of non-zero blocks: 48 Percentage of non-zero blocks: 26.67 Average number of blocks per group: 12 Maximum number of blocks per group: 12 Average number of matrix elements per group: 776 Maximum number of matrix elements per group: 776 Average number of blocks per CPU: 12 Maximum number of blocks per CPU: 12 Average number of matrix elements per CPU: 776 Maximum number of matrix elements per CPU: 776 DISTRIBUTION OF (21|3) matrix Number of non-zero blocks: 38 Percentage of non-zero blocks: 9.60 Average number of blocks per group: 10 Maximum number of blocks per group: 15 Average number of matrix elements per group: 4415 Maximum number of matrix elements per group: 5967 Average number of blocks per CPU: 10 Maximum number of blocks per CPU: 15 Average number of matrix elements per CPU: 4415 Maximum number of matrix elements per CPU: 5967 MM PARAMETERS Number of matrix elements per CPU of result matrix: 338 Optimal split factor: 2 -------------------------------------------------------------------------------- TAS MATRIX MULTIPLICATION DONE -------------------------------------------------------------------------------- GLOBAL INFO OF (21|3) block dimensions: 11 4 9 full dimensions: 83 25 74 process grid dimensions: 2 2 1 DISTRIBUTION OF (21|3) Number of non-zero blocks: 38 Percentage of non-zero blocks: 9.60 Average number of blocks per CPU: 10 Maximum number of blocks per CPU: 15 Average number of matrix elements per CPU: 4415 Maximum number of matrix elements per CPU: 5967 -------------------------------------------------------------------------------- TENSOR CONTRACTION DONE -------------------------------------------------------------------------------- Test passed! 
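The DISTRIBUTION blocks repeated throughout this log are simple reductions over each rank's local block list. One plausible way to compute the same kind of "Average/Maximum number of blocks per CPU" and "matrix elements per CPU" figures from a block list and an owner map is sketched below; the sample data is made up and the rounding convention is an assumption, not DBCSR's.

    # Toy sketch: load-balance statistics from (rows, cols, owner) block records.
    from collections import defaultdict

    blocks = [(6, 9, 0), (7, 9, 0), (6, 4, 1), (11, 9, 2), (5, 5, 2), (6, 9, 3)]  # hypothetical
    ncpu = 4

    blocks_per_cpu = defaultdict(int)
    elements_per_cpu = defaultdict(int)
    for rows, cols, owner in blocks:
        blocks_per_cpu[owner] += 1
        elements_per_cpu[owner] += rows * cols

    print("Average number of blocks per CPU:", sum(blocks_per_cpu.values()) // ncpu)
    print("Maximum number of blocks per CPU:", max(blocks_per_cpu.values()))
    print("Average number of matrix elements per CPU:", sum(elements_per_cpu.values()) // ncpu)
    print("Maximum number of matrix elements per CPU:", max(elements_per_cpu.values()))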
3.33066907387546962E-16 -------------------------------------------------------------------------------- Testing tensor contraction (12|3) x (12|45) = (3|45) -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- DBCSR TENSOR CONTRACTION: (12|3) x (12|45) = (3|45) -------------------------------------------------------------------------------- GLOBAL INFO OF (12|3) block dimensions: 4 11 9 full dimensions: 25 83 74 process grid dimensions: 2 2 1 DISTRIBUTION OF (12|3) Number of non-zero blocks: 13 Percentage of non-zero blocks: 3.28 Average number of blocks per CPU: 4 Maximum number of blocks per CPU: 8 Average number of matrix elements per CPU: 2214 Maximum number of matrix elements per CPU: 4149 GLOBAL INFO OF (12|45) block dimensions: 4 11 5 3 full dimensions: 25 83 32 28 process grid dimensions: 2 2 1 1 DISTRIBUTION OF (12|45) Number of non-zero blocks: 21 Percentage of non-zero blocks: 3.18 Average number of blocks per CPU: 6 Maximum number of blocks per CPU: 6 Average number of matrix elements per CPU: 34399 Maximum number of matrix elements per CPU: 105984 INDEX INFO tensor index: (cba) x (cbde) = (ade) matrix index: (cb|a) x (cb|de) = (a|de) aligning tensor index with data INDEX INFO tensor index: (cba) x (cbde) = (ade) matrix index: (cb|a) x (cb|de) = (a|de) large tensors: 1, 2; small tensor: 3 sorting contraction indices compatibility of (12|3): Normal compatibility of (12|45): Normal No redistribution of (12|45) No redistribution of (12|3) compatibility of (3|45): Normal No redistribution of (3|45) INDEX INFO tensor index: (cba) x (cbde) = (ade) matrix index: (cb|a) x (cb|de) = (a|de) -------------------------------------------------------------------------------- DBCSR TAS MATRIX MULTIPLICATION: (12|3) matrix x (12|45) matrix = (3|45) matrix -------------------------------------------------------------------------------- mm dims: 9 44 15 MM PARAMETERS Est. number of matrix elements per CPU of result matrix: 2107 Est. 
optimal split factor: 4 No redistribution of (12|3) matrix and (12|45) matrix Change split factor of (12|3) matrix : No Change split factor of (12|45) matrix : No mm case: |T x | = + SPLIT / PARALLELIZATION INFO splitting rows by factor 4 global grid sizes: 4x 1 grid sizes on subgroups: 1x 1 GLOBAL INFO OF (12|3) matrix block dimensions: 44 9 full dimensions: 2075 74 process grid dimensions: 4 1 GLOBAL INFO OF (12|45) matrix block dimensions: 44 15 full dimensions: 2075 896 process grid dimensions: 4 1 GLOBAL INFO OF (3|45) matrix block dimensions: 9 15 full dimensions: 74 896 process grid dimensions: 4 1 Change process grid: No DISTRIBUTION OF (12|3) matrix Number of non-zero blocks: 13 Percentage of non-zero blocks: 3.28 Average number of blocks per group: 4 Maximum number of blocks per group: 8 Average number of matrix elements per group: 2214 Maximum number of matrix elements per group: 4149 Average number of blocks per CPU: 4 Maximum number of blocks per CPU: 8 Average number of matrix elements per CPU: 2214 Maximum number of matrix elements per CPU: 4149 DISTRIBUTION OF (12|45) matrix Number of non-zero blocks: 21 Percentage of non-zero blocks: 3.18 Average number of blocks per group: 6 Maximum number of blocks per group: 6 Average number of matrix elements per group: 34399 Maximum number of matrix elements per group: 105984 Average number of blocks per CPU: 6 Maximum number of blocks per CPU: 6 Average number of matrix elements per CPU: 34399 Maximum number of matrix elements per CPU: 105984 DISTRIBUTION OF (3|45) matrix replicated Number of non-zero blocks: 14 Percentage of non-zero blocks: 2.59 Average number of blocks per group: 4 Maximum number of blocks per group: 7 Average number of matrix elements per group: 2107 Maximum number of matrix elements per group: 7014 Average number of blocks per CPU: 4 Maximum number of blocks per CPU: 7 Average number of matrix elements per CPU: 2107 Maximum number of matrix elements per CPU: 7014 MM PARAMETERS Number of matrix elements per CPU of result matrix: 1754 Optimal split factor: 4 -------------------------------------------------------------------------------- TAS MATRIX MULTIPLICATION DONE -------------------------------------------------------------------------------- GLOBAL INFO OF (3|45) block dimensions: 9 5 3 full dimensions: 74 32 28 process grid dimensions: 2 2 1 DISTRIBUTION OF (3|45) Number of non-zero blocks: 22 Percentage of non-zero blocks: 16.30 Average number of blocks per CPU: 6 Maximum number of blocks per CPU: 10 Average number of matrix elements per CPU: 2688 Maximum number of matrix elements per CPU: 8304 -------------------------------------------------------------------------------- TENSOR CONTRACTION DONE -------------------------------------------------------------------------------- Test passed! 
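Here the contraction runs over both fused indices 1 and 2 ("mm case: |T x | = +"), so the matricized form is a transposed-left matrix product. A small NumPy check of that equivalence, using the reported full dimensions and random data (names are assumptions of the sketch):

    # Illustrative only: (12|3) x (12|45) = (3|45) as A^T B on matricized operands.
    import numpy as np

    n1, n2, n3, n4, n5 = 25, 83, 74, 32, 28
    rng = np.random.default_rng(3)
    t12_3 = rng.random((n1, n2, n3))          # (12|3)
    t12_45 = rng.random((n1, n2, n4, n5))     # (12|45)

    ref = np.einsum("ijk,ijlm->klm", t12_3, t12_45)   # contract over indices 1 and 2

    a = t12_3.reshape(n1 * n2, n3)            # rows = fused (1,2) index
    b = t12_45.reshape(n1 * n2, n4 * n5)
    res = (a.T @ b).reshape(n3, n4, n5)       # transposed left operand: the "|T x |" case

    assert np.allclose(ref, res)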
1.1546319456101628E-14 -------------------------------------------------------------------------------- Testing tensor contraction (3|21) x (12|45) = (3|45) -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- DBCSR TENSOR CONTRACTION: (3|21) x (12|45) = (3|45) -------------------------------------------------------------------------------- GLOBAL INFO OF (3|21) block dimensions: 4 11 9 full dimensions: 25 83 74 process grid dimensions: 2 2 1 DISTRIBUTION OF (3|21) Number of non-zero blocks: 32 Percentage of non-zero blocks: 8.08 Average number of blocks per CPU: 8 Maximum number of blocks per CPU: 13 Average number of matrix elements per CPU: 4078 Maximum number of matrix elements per CPU: 5967 GLOBAL INFO OF (12|45) block dimensions: 4 11 5 3 full dimensions: 25 83 32 28 process grid dimensions: 2 2 1 1 DISTRIBUTION OF (12|45) Number of non-zero blocks: 36 Percentage of non-zero blocks: 5.45 Average number of blocks per CPU: 9 Maximum number of blocks per CPU: 12 Average number of matrix elements per CPU: 37373 Maximum number of matrix elements per CPU: 109692 INDEX INFO tensor index: (cba) x (cbde) = (ade) matrix index: (a|bc) x (cb|de) = (a|de) aligning tensor index with data INDEX INFO tensor index: (abc) x (cbde) = (ade) matrix index: (a|bc) x (cb|de) = (a|de) large tensors: 1, 2; small tensor: 3 sorting contraction indices compatibility of (3|21): Not compatible compatibility of (12|45): Normal No redistribution of (12|45) Redistribution of (3|21) compatible with (12|45) compatibility of (3|21): Normal compatibility of (3|45): Normal No redistribution of (3|45) INDEX INFO tensor index: (abc) x (cbde) = (ade) matrix index: (cb|a) x (cb|de) = (a|de) GLOBAL INFO OF (3|21) block dimensions: 9 11 4 full dimensions: 74 83 25 process grid dimensions: 1 2 2 DISTRIBUTION OF (3|21) Number of non-zero blocks: 32 Percentage of non-zero blocks: 8.08 Average number of blocks per CPU: 8 Maximum number of blocks per CPU: 13 Average number of matrix elements per CPU: 4078 Maximum number of matrix elements per CPU: 5967 -------------------------------------------------------------------------------- DBCSR TAS MATRIX MULTIPLICATION: (3|21) matrix x (12|45) matrix = (3|45) matrix -------------------------------------------------------------------------------- mm dims: 9 44 15 MM PARAMETERS Est. number of matrix elements per CPU of result matrix: 2635 Est. 
optimal split factor: 4 No redistribution of (3|21) matrix and (12|45) matrix Change split factor of (3|21) matrix : No Change split factor of (12|45) matrix : No mm case: |T x | = + SPLIT / PARALLELIZATION INFO splitting rows by factor 4 global grid sizes: 4x 1 grid sizes on subgroups: 1x 1 GLOBAL INFO OF (3|21) matrix block dimensions: 44 9 full dimensions: 2075 74 process grid dimensions: 4 1 GLOBAL INFO OF (12|45) matrix block dimensions: 44 15 full dimensions: 2075 896 process grid dimensions: 4 1 GLOBAL INFO OF (3|45) matrix block dimensions: 9 15 full dimensions: 74 896 process grid dimensions: 4 1 Change process grid: No DISTRIBUTION OF (3|21) matrix Number of non-zero blocks: 32 Percentage of non-zero blocks: 8.08 Average number of blocks per group: 8 Maximum number of blocks per group: 13 Average number of matrix elements per group: 4078 Maximum number of matrix elements per group: 5967 Average number of blocks per CPU: 8 Maximum number of blocks per CPU: 13 Average number of matrix elements per CPU: 4078 Maximum number of matrix elements per CPU: 5967 DISTRIBUTION OF (12|45) matrix Number of non-zero blocks: 36 Percentage of non-zero blocks: 5.45 Average number of blocks per group: 9 Maximum number of blocks per group: 12 Average number of matrix elements per group: 37373 Maximum number of matrix elements per group: 109692 Average number of blocks per CPU: 9 Maximum number of blocks per CPU: 12 Average number of matrix elements per CPU: 37373 Maximum number of matrix elements per CPU: 109692 DISTRIBUTION OF (3|45) matrix replicated Number of non-zero blocks: 23 Percentage of non-zero blocks: 4.26 Average number of blocks per group: 6 Maximum number of blocks per group: 9 Average number of matrix elements per group: 2676 Maximum number of matrix elements per group: 8334 Average number of blocks per CPU: 6 Maximum number of blocks per CPU: 9 Average number of matrix elements per CPU: 2676 Maximum number of matrix elements per CPU: 8334 MM PARAMETERS Number of matrix elements per CPU of result matrix: 2084 Optimal split factor: 4 -------------------------------------------------------------------------------- TAS MATRIX MULTIPLICATION DONE -------------------------------------------------------------------------------- GLOBAL INFO OF (3|45) block dimensions: 9 5 3 full dimensions: 74 32 28 process grid dimensions: 2 2 1 DISTRIBUTION OF (3|45) Number of non-zero blocks: 29 Percentage of non-zero blocks: 21.48 Average number of blocks per CPU: 8 Maximum number of blocks per CPU: 10 Average number of matrix elements per CPU: 3216 Maximum number of matrix elements per CPU: 8304 -------------------------------------------------------------------------------- TENSOR CONTRACTION DONE -------------------------------------------------------------------------------- Test passed! 
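This contraction again requires "Redistribution of (3|21) compatible with (12|45)": the left tensor is permuted so that the contracted index pair forms the fused row dimension it shares with (12|45), after which the same transposed matricized product applies. An illustrative dense sketch (the index ordering is chosen for the example and is not taken from the test):

    # Illustrative only: permuting axes so both operands expose the same fused rows.
    import numpy as np

    n1, n2, n3, n4, n5 = 25, 83, 74, 32, 28
    rng = np.random.default_rng(4)
    t3_21 = rng.random((n3, n2, n1))          # assumed storage order (3,2,1) for the sketch
    t12_45 = rng.random((n1, n2, n4, n5))     # storage order (1,2,4,5)

    ref = np.einsum("kji,ijlm->klm", t3_21, t12_45)   # contract over indices 1 and 2

    # Permute (3,2,1) -> (1,2,3) so both operands share the fused (1,2) row index.
    t_aligned = np.transpose(t3_21, (2, 1, 0))
    a = t_aligned.reshape(n1 * n2, n3)
    b = t12_45.reshape(n1 * n2, n4 * n5)
    res = (a.T @ b).reshape(n3, n4, n5)

    assert np.allclose(ref, res)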
2.84217094304040074E-14
--------------------------------------------------------------------------------
Testing tensor contraction (13|2) x (54|21) = (3|45)
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
DBCSR TENSOR CONTRACTION: (13|2) x (54|21) = (3|45)
--------------------------------------------------------------------------------
GLOBAL INFO OF (13|2)
 block dimensions: 4 11 9
 full dimensions: 25 83 74
 process grid dimensions: 2 2 1
DISTRIBUTION OF (13|2)
 Number of non-zero blocks: 32
 Percentage of non-zero blocks: 8.08
 Average number of blocks per CPU: 8
 Maximum number of blocks per CPU: 13
 Average number of matrix elements per CPU: 4078
 Maximum number of matrix elements per CPU: 5967
GLOBAL INFO OF (54|21)
 block dimensions: 4 11 5 3
 full dimensions: 25 83 32 28
 process grid dimensions: 2 2 1 1
DISTRIBUTION OF (54|21)
 Number of non-zero blocks: 36
 Percentage of non-zero blocks: 5.45
 Average number of blocks per CPU: 9
 Maximum number of blocks per CPU: 12
 Average number of matrix elements per CPU: 37373
 Maximum number of matrix elements per CPU: 109692
INDEX INFO
 tensor index: (bca) x (bcde) = (ade)
 matrix index: (ba|c) x (ed|cb) = (a|de)
aligning tensor index with data
INDEX INFO
 tensor index: (bac) x (edcb) = (ade)
 matrix index: (ba|c) x (ed|cb) = (a|de)
large tensors: 1, 2; small tensor: 3
sorting contraction indices
compatibility of (13|2): Not compatible
compatibility of (54|21): Transposed
No redistribution of (54|21)
Redistribution of (13|2) compatible with (54|21)
srun: error: nid03400: tasks 1-3: Segmentation fault
srun: Terminating job step 27166243.13
slurmstepd: error: *** STEP 27166243.13 ON nid03400 CANCELLED AT 2020-11-23T13:32:35 ***
srun: error: nid03400: task 0: Terminated
srun: Force Terminated job step 27166243.13
        Start 15: dbcsr_tas_unittest
15/20 Test #15: dbcsr_tas_unittest .................................... Passed 45.87 sec
        Start 16: dbcsr_test_csr_conversions
16/20 Test #16: dbcsr_test_csr_conversions ............................ Passed 2.64 sec
        Start 17: libsmm_acc_unittest_multiply
17/20 Test #17: libsmm_acc_unittest_multiply .......................... Passed 651.30 sec
        Start 18: libsmm_acc_unittest_transpose
18/20 Test #18: libsmm_acc_unittest_transpose ......................... Passed 214.62 sec
        Start 19: libsmm_acc_timer_multiply-autotuned
19/20 Test #19: libsmm_acc_timer_multiply-autotuned ................... Passed 478.51 sec
        Start 20: libsmm_acc_timer_multiply-predicted
20/20 Test #20: libsmm_acc_timer_multiply-predicted ................... Passed 530.15 sec

95% tests passed, 1 tests failed out of 20

Total Test time (real) = 2077.56 sec

The following tests FAILED:
         14 - dbcsr_tensor_unittest (Failed)
Errors while running CTest
make: *** [Makefile:119: test] Error 8