This repository was archived by the owner on Jul 7, 2019. It is now read-only.
216 commits
c4a115c
create NMR FireTasks stub
May 31, 2016
09b736b
add triple jump structure relaxation strategy parameters
May 31, 2016
d93cb30
fix typo in ISMEAR
May 31, 2016
dc91100
also set ENCUT in step 1
xhqu1981 May 31, 2016
b3a2939
set EDIFF=1.0E-10 in step 3
xhqu1981 May 31, 2016
0cec9e0
also include K-points in triple jump relaxation settings
xhqu1981 May 31, 2016
276edf9
tune FIRE optimizer parameters in step 3
xhqu1981 May 31, 2016
8db9fbf
translate config file to VaspInputSet
xhqu1981 May 31, 2016
e3f22d3
don't write CHGCAR & WAVECAR in NMR calculations
xhqu1981 May 31, 2016
c954df9
add NMR tensor set
xhqu1981 Jun 1, 2016
b6c6155
add nuclear quadrupole moments for more elements
xhqu1981 Jun 1, 2016
bb05f86
support EFG
Jun 1, 2016
851f149
fix return type bug
Jun 1, 2016
6493653
correct type should be list rather than set
Jun 1, 2016
b6c12f2
finalize snl_to_nmr_spec()
Jun 1, 2016
bc04f02
add options to remove velocities from CONTCAR
xhqu1981 Jun 1, 2016
e6b4222
use Python3 style print
xhqu1981 Jun 1, 2016
5420ba7
add triple jump relaxation to workflows
xhqu1981 Jun 1, 2016
82d6b56
add class NmrVaspToDBTask
xhqu1981 Jun 1, 2016
5413b2a
refactor FW creation to functions
xhqu1981 Jun 2, 2016
2e4e8da
add NMR tensor calculation to workflow
Jun 2, 2016
a37ae90
add NMR workflow to submission framework
Jun 2, 2016
26631a9
use Python3 style print in process_submissions
Jun 2, 2016
bce99f5
conform to PEP8
Jun 2, 2016
cc3649d
use Algo = Fast for triple jump relaxations
xhqu1981 Jun 2, 2016
0ada854
Merge branch 'nmr_wf' of github.com:xhqu1981/MPWorks into nmr_wf
xhqu1981 Jun 2, 2016
83fc651
go through DB authentication only when the user/password pair is present
xhqu1981 Jun 2, 2016
e9fe0e9
use deferred import to fix the import error from pymatgen.io.sets
xhqu1981 Jun 2, 2016
99bcfef
fix the file list type bug
xhqu1981 Jun 2, 2016
7ca4592
fix fw_id bug
xhqu1981 Jun 2, 2016
84ffcd1
pointing to sets_deprecated to avoid the import error
xhqu1981 Jun 2, 2016
6835ca6
fix typo. DictSet is in new style InputSet API
xhqu1981 Jun 2, 2016
ecb2ee6
fix triple jump relax FireWork name
xhqu1981 Jun 2, 2016
71fd3ad
fix FireWorks links dict
xhqu1981 Jun 2, 2016
e6849d1
pymongo find_and_modify() is deprecated, replace with find_one_and_up…
xhqu1981 Jun 2, 2016
98a76f7
fix parameter bugs in find_one_and_update()
xhqu1981 Jun 2, 2016
67a2b61
ensure_index() is deprecated, replace with create_index()
xhqu1981 Jun 2, 2016
20ac71f
reduce structure to primitive cell from the beginning therefore reduce…
Jun 3, 2016
b62f9a9
add options to control whether to update vasp in vasp_io_tasks
Jun 3, 2016
fe0293d
fix OUTCAR file name bug
xhqu1981 Jun 3, 2016
4a5c1a4
fix KPAR/NPAR settings
xhqu1981 Jun 3, 2016
aec3a01
fix NMR tensor retrieving method
xhqu1981 Jun 3, 2016
5e7c834
use prec=accurate throughout the triple jump relaxation
Jun 4, 2016
3114d70
Shyue Ping removed name parameter from DictSet, refactor to adapt thi…
Jun 5, 2016
e4c66dd
change the default input set to the ongoing one in the task
Jun 5, 2016
ecf4205
fix bugs in setting default custodian input set
Jun 5, 2016
96764ab
fix INCAR keyword
Jun 5, 2016
a92752e
use LREAL=Auto for all steps
Jun 5, 2016
2396582
use SetupTask style FireTask scheme in NMR calculations
Jun 6, 2016
b7f5095
fix bugs in FireTask implementation
Jun 6, 2016
7beb21b
fix bugs in input set dict setup
Jun 6, 2016
c2cf7d7
add triple jump relax to mpsnl update_spec
Jun 6, 2016
6d307da
let the NMR tensor depend on DB task such that mpsnl can be updated …
Jun 6, 2016
07c0e21
also set vaspinputset_name
Jun 6, 2016
0d1dc77
update POTCAR strategy
xhqu1981 Jun 6, 2016
d2af2db
use symprec=1.0E-8 for all the triple jump relax and chemical shift c…
xhqu1981 Jun 6, 2016
c83c1bd
also use symprec in EFG calculations
Jun 7, 2016
2004f3d
update chemical shift parsing
Jun 12, 2016
082d811
setup MOVE_TO_GARDEN for nmr workflows
Jun 12, 2016
d025ff4
add support for job packing
Jun 12, 2016
0d075a8
prefix GARDEN variables with module name
Jun 12, 2016
9dd67ac
fix nprocs per node
Jun 12, 2016
5add4ae
refactor GARDEN variable to singleton to enable dynamic adjustment
Jun 12, 2016
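The singleton refactor above lets job-packed workers adjust the shared GARDEN settings at runtime. A minimal sketch of the pattern, assuming a settings class like MPWorks' `WFSettings` (the attribute name and default path here are hypothetical):

```python
class WFSettings:
    """Singleton settings holder: every call to WFSettings() returns the
    same object, so a change made anywhere is visible everywhere."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.garden_loc = "/default/garden"  # placeholder path
        return cls._instance

# every import site sees the same object, so changes propagate dynamically
a = WFSettings()
b = WFSettings()
a.garden_loc = "/custom/garden"
assert b.garden_loc == "/custom/garden"
```

Compared with a module-level constant, this allows the Garden location to be swapped per-run (e.g. for the NMR-only locations added later) without re-importing modules.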
60094df
fix typos in accessing WFSettings
Jun 12, 2016
94a615a
add TripleJumpRelaxVaspToDBTask to setup NMR only Garden locations
Jun 13, 2016
966b722
fix return value bug
Jun 13, 2016
87b20ca
fix parameter typo
Jun 13, 2016
1177cc7
fix mpirun name
xhqu1981 Jun 22, 2016
282be4a
also specify number of nodes for srun
xhqu1981 Jul 1, 2016
ad8e9a3
fix nodes and ranks flag
xhqu1981 Jul 1, 2016
524ea16
Don't honor the SLURM_NTASKS in case of job packing, because SLURM_NT…
Jul 8, 2016
2c6a11d
conform to PEP8
Jul 8, 2016
29e9491
use ISYM=0 instead of SYMPREC
Jul 18, 2016
52e030e
use tighter tolerance in SNLGroup search for NMR
Jul 18, 2016
eee23dc
use the same settings in NMR tensor calculations
Jul 18, 2016
d6ccbd7
don't use default input set in NMR VaspJob setup
xhqu1981 Jul 20, 2016
0782eb2
Merge branch 'master' into nmr_wf
xhqu1981 Jul 22, 2016
3064013
use set operation to simplify the logic of task type
xhqu1981 Jul 22, 2016
a4294f1
change requirements back to pmg4+
xhqu1981 Jul 22, 2016
b952aa7
Merge remote-tracking branch 'upstream/pmg4' into nmr_wf
xhqu1981 Jul 22, 2016
2961b0c
Merge branch 'master' into nmr_wf
Jul 23, 2016
2bd185f
change all the print to Python3 style
Jul 25, 2016
05381c0
explicitly use UTF-16 for README.rst
xhqu1981 Jul 25, 2016
19a28b4
fix typo
xhqu1981 Jul 25, 2016
1a218b3
don't use encoding parameter in Python2
xhqu1981 Jul 25, 2016
1be2585
replace basestring with str in Python3
xhqu1981 Jul 25, 2016
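The `basestring` replacement above is one of the standard Python 2→3 compatibility fixes in this branch. A common shim, sketched here (the helper name is illustrative, not the branch's actual code):

```python
import sys

# Python 2's basestring covers both str and unicode; Python 3 has only str.
if sys.version_info[0] >= 3:
    string_types = (str,)
else:
    string_types = (basestring,)  # noqa: F821 -- defined only on Python 2

def is_string(x):
    """True for any text type on either Python major version."""
    return isinstance(x, string_types)
```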
94f0e5d
fix indent bug
xhqu1981 Jul 25, 2016
d8e6fb4
make iterate dict items Python3 compatible
Jul 26, 2016
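Making dict iteration Python 3 compatible, as this commit does, usually means dropping `iteritems()`. A minimal sketch with sample data:

```python
d = {"ENCUT": 520, "ISMEAR": 0}

# Python 2 code often wrote d.iteritems(); that method is gone in Python 3.
# d.items() works on both (a list on py2, a lightweight view on py3).
pairs = sorted(d.items())
```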
a4d8f34
explicitly using text mode for zopen to be compatible with Python3 for…
xhqu1981 Jul 27, 2016
3b4d777
add NMR dir to RUN_LOCS
xhqu1981 Aug 26, 2016
a7f37b5
change six package version requirement to 1.10.0 to be consistent wit…
xhqu1981 Aug 26, 2016
c667f0a
Merge branch 'master' into nmr_wf
xhqu1981 Jan 20, 2017
6b232e5
use NCORE instead of NPAR/KPAR since it works for both geometry optim…
xhqu1981 Jan 21, 2017
1f98b27
revert NPAR/KPAR settings
Jan 23, 2017
490c4d4
convert all indents in snl_mongo.py to spaces
xhqu1981 Jan 23, 2017
9f1a781
Merge branch 'nmr_wf' of github.com:xhqu1981/MPWorks into nmr_wf
xhqu1981 Jan 23, 2017
4eb1863
fixed indentation
xhqu1981 Jan 23, 2017
81f29e4
fix indentation in process_submission
xhqu1981 Jan 23, 2017
d4fef55
fix indentation in mp_vaspdrone
xhqu1981 Jan 23, 2017
c3cb813
fix indentation in check_snl/builders/base.py
xhqu1981 Jan 23, 2017
1d89a67
fix_bs_controller_tasks
xhqu1981 Jan 23, 2017
4978b95
fix indentation in fix_mpcomplete
xhqu1981 Jan 23, 2017
84a40ea
fix bug in determining handlers choice
xhqu1981 Jan 23, 2017
a250ab1
fix type bug in task_type comparison
xhqu1981 Jan 23, 2017
91c3d31
convert to list to force the in-place decoding of the Handlers objects
xhqu1981 Jan 24, 2017
d6d0ec2
fix all the calls to map
xhqu1981 Jan 24, 2017
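The two commits above ("convert to list to force the in-place decoding" and "fix all the calls to map") address the same Python 3 change: `map` now returns a lazy, single-use iterator. A short sketch of why wrapping in `list()` matters:

```python
names = ["incar", "poscar"]

upper = map(str.upper, names)  # py3: a lazy iterator, consumed exactly once
upper = list(upper)            # force evaluation so the result is reusable
```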
eecfb36
add support for multiple VASP binary attempt
Jan 24, 2017
2333ec0
add options to choose whether to start the attempts from initial input file
xhqu1981 Jan 24, 2017
4d4200f
use explicit integer division operator to be py3 compatible
xhqu1981 Jan 25, 2017
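The explicit integer division fix above guards index arithmetic like the `nrow, ncol` computation in the builders. In Python 2, `/` on two ints truncated; in Python 3 it returns a float, so `//` is needed on both:

```python
index, ncols = 7, 3

# `//` floors on both Python 2 and 3, keeping the result an int
nrow, ncol = index // ncols, index % ncols
```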
39a1237
ensure srun is running with -v option
Jan 26, 2017
0e0010f
fix multiple binary logic, propagate CustodianError exception
Jan 26, 2017
2d70cb7
set exception in case that attempts continue with other binaries
Jan 27, 2017
67aebce
catch all general Exception from custodian in case that the attempts …
Jan 27, 2017
e518f6b
use lazy connection for MongoClient
xhqu1981 Feb 3, 2017
f170824
use the share LaunchPad in case of job packing
xhqu1981 Feb 3, 2017
a59f51e
handle run location in the period of CSCRATCH transition, in which th…
Mar 24, 2017
c26af05
change K-points in NMR workflow to fully automatic
Mar 25, 2017
b1b1f4a
reduce number of K-points
Mar 25, 2017
1e5edc8
tighten structure compare by 10 times for NMR calculation
Mar 26, 2017
ed4002d
add mechanism to avoid infinite loop in spawning dynamic FW
Mar 26, 2017
fd04620
if the structure is relaxed for more than 3 steps while StructureMatch…
Mar 26, 2017
d086b13
fix input rewind logic
Mar 26, 2017
21f2928
always update geometry
Mar 26, 2017
ac54701
add missing target file
Mar 26, 2017
f0e1f3c
tighten the structure matcher criteria even further by 10 times.
Mar 27, 2017
46bc93b
check the validation of CONTCAR before replacing POSCAR
xhqu1981 Mar 27, 2017
b208e1d
back up custodian.json for every VASP binary run
xhqu1981 Mar 27, 2017
08c4dcc
add special for dynamic workflows in case of triple jump step 3
xhqu1981 Mar 28, 2017
e033d35
always use LaunchPad directly rather than through the DataServer
xhqu1981 Mar 28, 2017
2355621
change dynamic step FIRE optimizer parameters to 0.1 times of default…
xhqu1981 Mar 28, 2017
7232c68
fix fw_name
Mar 29, 2017
cf01498
remove old file before starting new calculations to avoid confusing cus…
Mar 29, 2017
4e80d95
fix "TypeError: a bytes-like object is required, not 'int'" in VaspCo…
Mar 29, 2017
21b62d1
fix super class call
xhqu1981 Mar 30, 2017
62671cf
just copy the parent FWAction
xhqu1981 Mar 30, 2017
13a64ad
support SCAN functional
xhqu1981 Mar 31, 2017
4cf9813
fix typo
xhqu1981 Mar 31, 2017
f44d89b
fix potcar setting in case of SCAN functional
xhqu1981 Mar 31, 2017
d998871
use valence dependent pseudopotential
xhqu1981 Mar 31, 2017
924c03f
Put the larger ENMAX specie first, fix the "PSMAXN for non-local pote…
Apr 1, 2017
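The reordering fix above puts the species with the largest ENMAX first in the POTCAR to avoid the "PSMAXN for non-local potential" warning. A sketch of the sort, with hypothetical ENMAX values (the real ones come from the POTCAR headers):

```python
# Hypothetical ENMAX values in eV; actual values are read from POTCAR files.
enmax = {"O": 400.0, "Li": 499.0, "P": 255.0}
species = ["O", "Li", "P"]

# Put the larger-ENMAX species first, as the commit describes:
ordered = sorted(species, key=lambda el: enmax[el], reverse=True)
```

Note this ordering must then be kept stable downstream, which is why the follow-up commit stops `DictSet` from re-sorting the structure.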
c7b618a
don't sort structure again by DictSet
Apr 1, 2017
f040c22
update FIRE optimizer parameters
Apr 1, 2017
055a9e4
use the same settings for EFG and Chemical Shift
xhqu1981 Apr 3, 2017
a6df500
Merge branch 'mb_nmr_wf' of github.com:xhqu1981/MPWorks into mb_nmr_wf
xhqu1981 Apr 3, 2017
8f8d079
fix typo
xhqu1981 Apr 3, 2017
fa1373c
remove unnecessary keywords for EFG
Apr 4, 2017
17f3d12
also consider ENMIN
xhqu1981 Apr 4, 2017
6a619ac
further tighten NMR structure matcher threshold
xhqu1981 Apr 5, 2017
e7ac401
update NMR calculation parameters
xhqu1981 Apr 10, 2017
2c9056e
revert NMR parameter changes
xhqu1981 Apr 11, 2017
93773f3
also don't use ADDGRID for EFG
xhqu1981 Apr 11, 2017
bae353f
reduce max ENCUT to 30% more than ENMAX
xhqu1981 Apr 11, 2017
a841396
reduce ENCUT to 5% more than ENMAX in the first step of triple jump
xhqu1981 Apr 11, 2017
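The two ENCUT commits above cap the cutoff relative to the largest ENMAX in the POTCAR: 5% headroom in triple-jump step 1, 30% overall. A sketch of the arithmetic (helper name and sample ENMAX values are hypothetical; the percentages come from the commit messages):

```python
def encut_cap(enmax_values, headroom):
    """Cap ENCUT at (1 + headroom) times the largest ENMAX in the POTCAR."""
    return (1.0 + headroom) * max(enmax_values)

step1 = encut_cap([400.0, 499.0], 0.05)  # first triple-jump step: +5%
cap   = encut_cap([400.0, 499.0], 0.30)  # overall maximum: +30%
```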
ee4ff43
update POTCAR choice
xhqu1981 Apr 11, 2017
626c84e
update POTCAR choice for Ca
xhqu1981 Apr 11, 2017
8387014
Merge branch 'mb_nmr_wf' of github.com:xhqu1981/MPWorks into mb_nmr_wf
Apr 12, 2017
5ba2efd
remove unnecessary keywords in EFG INCAR
Apr 12, 2017
a7eefb6
also delete custodian.json
Apr 16, 2017
fe881cf
use 50 atoms as threshold to define as large cell in NMR workflow
Apr 16, 2017
6931948
use fresh error handlers to reset error_count for every custodian run
Apr 23, 2017
34a56c7
make a deep copy to make sure the new instance is independent
Apr 23, 2017
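The deep-copy fix above ensures each custodian run gets handlers whose `error_count` starts fresh, independent of earlier runs. A minimal sketch using plain dicts to stand in for the handler objects:

```python
import copy

# Hypothetical handler state; real handlers are custodian objects.
handlers = [{"name": "VaspErrorHandler", "error_count": 3}]

# copy.deepcopy yields a fully independent instance, so resetting the
# counter on the copy leaves the original untouched.
fresh = copy.deepcopy(handlers)
fresh[0]["error_count"] = 0
```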
409c317
use StdErrHandler to deal with out of memory error
xhqu1981 Apr 24, 2017
a271139
Merge remote-tracking branch 'origin/mb_nmr_wf' into mb_nmr_wf
xhqu1981 Apr 24, 2017
f14b643
large molecules use large NPAR since it will be run with more CPU cores
xhqu1981 Apr 25, 2017
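The NPAR commits above scale parallelization with system size: large cells run on more CPU cores, so they get a larger NPAR. A sketch of that decision, assuming the 50-atom "large cell" threshold from a later commit (the exact NPAR values here are illustrative, not the workflow's real numbers):

```python
def choose_npar(n_atoms, n_cores, large_threshold=50):
    """Illustrative NPAR selection: bigger cells, run on more cores,
    get a larger NPAR so band parallelization stays balanced."""
    if n_atoms >= large_threshold:
        return max(4, n_cores // 8)
    return 2
```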
8677630
update NPAR/KPAR setting
xhqu1981 Apr 27, 2017
98546f7
also backup OUTCAR and vasp.out to binary run file
xhqu1981 Apr 28, 2017
09aec02
also back up INCAR, POSCAR and KPOINTS to binary.#.tar.gz
Apr 29, 2017
03e3d64
always use ISMEAR=0 for chemical shift calculation
xhqu1981 May 3, 2017
1888ea8
Merge remote-tracking branch 'origin/mb_nmr_wf' into mb_nmr_wf
xhqu1981 May 3, 2017
a2db1af
add stub for chemical shift k-point average dynamic workflow
May 4, 2017
ca4da53
Manual chemical shift k-points average SCF part
May 4, 2017
c9c1751
The chemical shifts from ISMEAR=0 and ISMEAR=-5 are consistent, no nee…
xhqu1981 May 4, 2017
a9221f8
add FireTask classes for manual K-points averaging
xhqu1981 May 4, 2017
8ef207c
also print the checksum as file name
xhqu1981 May 4, 2017
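Naming files by their content checksum, as the commit above does, makes the manually k-point-averaged runs easy to deduplicate: identical inputs always map to the same name. A sketch (the helper, hash choice, and suffix are assumptions; the commit only says a checksum is used as the file name):

```python
import hashlib

def checksum_name(payload: bytes, suffix: str = ".chk") -> str:
    """Derive a deterministic file name from the file's contents."""
    return hashlib.md5(payload).hexdigest() + suffix

name = checksum_name(b"KPOINTS\n0\nGamma\n1 1 1\n")
```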
77bf9b7
fix file name
xhqu1981 May 4, 2017
76ea1f1
update function call
xhqu1981 May 4, 2017
c105b98
finish SCF dynamic workflow
xhqu1981 May 4, 2017
834daa3
fix spec typo
xhqu1981 May 4, 2017
3bf3317
fix fw_name
xhqu1981 May 4, 2017
3eae287
delete prev_vasp_dir from new job spec
xhqu1981 May 4, 2017
731ff03
set prev_task_type and prev_vasp_dir
xhqu1981 May 4, 2017
637012b
finish K-points average generation
xhqu1981 May 4, 2017
e5cdba6
fix handler specification
xhqu1981 May 4, 2017
4ea7d19
fix INCAR setting
xhqu1981 May 4, 2017
66234c6
fix fwid
xhqu1981 May 4, 2017
5031a25
finish collect k-points average task
xhqu1981 May 4, 2017
4825d00
finish collect k-points average final db insertion
xhqu1981 May 5, 2017
e2e0729
fix KPOINTS file name
May 5, 2017
6efe6ed
first check the existence of std_err.txt
May 5, 2017
ff1fc79
fix error message detect
xhqu1981 May 5, 2017
9779620
use KPAR=1 for single k-point calculations
xhqu1981 May 5, 2017
ec7b419
fix NMR SCF job file path
May 18, 2017
44f2ddc
fix typo
May 18, 2017
4ed2028
add option to use environment variable GARDEN_LOC to customize the lo…
xhqu1981 May 19, 2017
4f6624c
merge run_tags to query in DupeFinder to be consistent with Kiran's c…
xhqu1981 May 22, 2017
2d335e4
fix type error
May 29, 2017
d1e5f14
fix DB insertion
May 29, 2017
3aed2f8
fix file name
May 29, 2017
ad61159
fix tweak of fw_spec
May 29, 2017
104fb0f
fix data formatting
May 29, 2017
5079aa6
fix data type
May 29, 2017
a7c2767
add F pseudopotential choice
xhqu1981 Jun 6, 2017
65c8d28
only assign valence for materials having valence specific pseudopoten…
xhqu1981 Jun 6, 2017
bd9e12e
Merge remote-tracking branch 'origin/mb_nmr_wf' into mb_nmr_wf
xhqu1981 Jun 6, 2017
e844efe
let alternative command bypass the default command
xhqu1981 Jun 6, 2017
c6e40ad
guarantee the elements have been assigned pawpot explicitly
xhqu1981 Jun 7, 2017
bf30519
remove redundant print
xhqu1981 Jun 7, 2017
a73c50e
change assert to exception to provide the element name
xhqu1981 Jun 7, 2017
b5aada3
fix typo
xhqu1981 Jun 7, 2017
6287714
add more elements to pawpot
xhqu1981 Jun 7, 2017
0fe81ab
reorder nuclear quadrupolar moments after the atoms were reordered
Jul 4, 2017
1d5f777
make a private method
Jul 4, 2017
fbf32f8
fix a bug of copied instance
Jul 4, 2017
6dcee6b
update pawpot choice
xhqu1981 Aug 30, 2017
ebd10c4
Merge remote-tracking branch 'origin/mb_nmr_wf' into mb_nmr_wf
xhqu1981 Aug 30, 2017
b6be7cb
add pawpot for B, In, Pb
Oct 20, 2017
Binary file modified README.rst
47 changes: 26 additions & 21 deletions mpworks/check_snl/builders/base.py
@@ -1,10 +1,15 @@
import sys, multiprocessing, time
from mpworks.snl_utils.mpsnl import SNLGroup
import multiprocessing
import sys
import time

from init_plotly import py, stream_ids, categories
from matgendb.builders.core import Builder
from matgendb.builders.util import get_builder_log
from mpworks.check_snl.utils import div_plus_mod
from pymatgen.analysis.structure_matcher import StructureMatcher, ElementComparator
from init_plotly import py, stream_ids, categories

from mpworks.check_snl.utils import div_plus_mod
from mpworks.snl_utils.mpsnl import SNLGroup

if py is not None:
from plotly.graph_objs import *

@@ -66,7 +71,7 @@ def get_items(self, snls=None, snlgroups=None, ncols=None):
return self._snls.query(distinct_key='snl_id')

def process_item(self, item, index):
nrow, ncol = index/self._ncols, index%self._ncols
nrow, ncol = index//self._ncols, index%self._ncols
snlgroups = {} # keep {snlgroup_id: SNLGroup} to avoid dupe queries
if isinstance(item, dict) and 'snlgroup_ids' in item:
for gid in item['snlgroup_ids']:
@@ -83,27 +88,27 @@ def _push_to_plotly(self):
heatmap_z = self._counter._getvalue() if not self._seq else self._counter
bar_x = self._mismatch_counter._getvalue() if not self._seq else self._mismatch_counter
md = self._mismatch_dict._getvalue() if not self._seq else self._mismatch_dict
try:
self._streams[0].write(Heatmap(z=heatmap_z))
except:
exc_type, exc_value, exc_traceback = sys.exc_info()
_log.info('%r %r', exc_type, exc_value)
_log.info('_push_to_plotly ERROR: heatmap=%r', heatmap_z)
try:
self._streams[1].write(Bar(x=bar_x))
except:
exc_type, exc_value, exc_traceback = sys.exc_info()
_log.info('%r %r', exc_type, exc_value)
_log.info('_push_to_plotly ERROR: bar=%r', bar_x)
for k,v in md.iteritems():
try:
self._streams[0].write(Heatmap(z=heatmap_z))
except:
exc_type, exc_value, exc_traceback = sys.exc_info()
_log.info('%r %r', exc_type, exc_value)
_log.info('_push_to_plotly ERROR: heatmap=%r', heatmap_z)
try:
self._streams[1].write(Bar(x=bar_x))
except:
exc_type, exc_value, exc_traceback = sys.exc_info()
_log.info('%r %r', exc_type, exc_value)
_log.info('_push_to_plotly ERROR: bar=%r', bar_x)
for k, v in md.items():
if len(v) < 1: continue
try:
self._streams[2].write(Scatter(
x=self._mismatch_counter[categories[self.checker_name].index(k)],
y=k, text='<br>'.join(v)
))
_log.info('_push_to_plotly: mismatch_dict[%r]=%r', k, v)
self._mismatch_dict.update({k:[]}) # clean
self._mismatch_dict.update({k: []}) # clean
time.sleep(0.052)
except:
exc_type, exc_value, exc_traceback = sys.exc_info()
@@ -122,7 +127,7 @@ def _increase_counter(self, nrow, ncol, mismatch_dict):
for k in categories[self.checker_name]:
mc[categories[self.checker_name].index(k)] += len(mismatch_dict[k])
self._mismatch_counter = mc
for k,v in mismatch_dict.iteritems():
for k,v in mismatch_dict.items():
self._mismatch_dict[k] += v
currow = self._counter[nrow]
currow[ncol] += 1
@@ -136,7 +141,7 @@ def _increase_counter(self, nrow, ncol, mismatch_dict):
if self._lock is not None: self._lock.release()

def finalize(self, errors):
if py is not None: self._push_to_plotly()
if py is not None: self._push_to_plotly()
_log.info("%d items processed.", self._counter_total.value)
return True

2 changes: 1 addition & 1 deletion mpworks/check_snl/builders/init_plotly.py
@@ -90,4 +90,4 @@
fig['layout'] = layout
py.plot(fig, filename='builder_stream', auto_open=False)
else:
print 'plotly ImportError'
print('plotly ImportError')
46 changes: 23 additions & 23 deletions mpworks/check_snl/check_snl.py
Original file line number Diff line number Diff line change
@@ -36,16 +36,16 @@
)

num_ids_per_stream = 20000
num_ids_per_stream_k = num_ids_per_stream/1000
num_ids_per_stream_k = num_ids_per_stream//1000
num_snls = sma.snl.count()
num_snlgroups = sma.snlgroups.count()
num_pairs_per_job = 1000 * num_ids_per_stream
num_pairs_max = num_snlgroups*(num_snlgroups-1)/2
num_pairs_max = num_snlgroups*(num_snlgroups-1)//2

num_snl_streams = div_plus_mod(num_snls, num_ids_per_stream)
num_snlgroup_streams = div_plus_mod(num_snlgroups, num_ids_per_stream)
num_jobs = div_plus_mod(num_pairs_max, num_pairs_per_job)
print num_snl_streams, num_snlgroup_streams, num_jobs
print(num_snl_streams, num_snlgroup_streams, num_jobs)

checks = ['spacegroups', 'groupmembers', 'canonicals']
categories = [ 'SG Change', 'SG Default', 'PybTeX', 'Others' ]
@@ -113,7 +113,7 @@ def __iter__(self):
def _get_initial_pair(self, job_id):
N, J, M = num_snlgroups, job_id, num_pairs_per_job
i = int(N+.5-sqrt(N*(N-1)+.25-2*J*M))
j = J*M-(i-1)*(2*N-i)/2+i+1
j = J*M-(i-1)*(2*N-i)//2+i+1
return Pair(i,j)
def next(self):
if self.num_pairs > num_pairs_per_job:
@@ -202,7 +202,7 @@ def init_plotly(args):

def check_snl_spacegroups(args):
"""check spacegroups of all available SNLs"""
range_index = args.start / num_ids_per_stream
range_index = args.start // num_ids_per_stream
idxs = [range_index*2]
idxs += [idxs[0]+1]
s = [py.Stream(stream_ids[i]) for i in idxs]
@@ -242,7 +242,7 @@ def check_snl_spacegroups(args):

def check_snls_in_snlgroups(args):
"""check whether SNLs in each SNLGroup still match resp. canonical SNL"""
range_index = args.start / num_ids_per_stream
range_index = args.start // num_ids_per_stream
idxs = [2*(num_snl_streams+range_index)]
idxs += [idxs[0]+1]
s = [py.Stream(stream_ids[i]) for i in idxs]
@@ -314,7 +314,7 @@ def analyze(args):
if args.t:
if args.fig_id == 42:
label_entries = filter(None, '<br>'.join(fig['data'][2]['text']).split('<br>'))
pairs = map(make_tuple, label_entries)
pairs = list(map(make_tuple, label_entries))
grps = set(chain.from_iterable(pairs))
snlgrp_cursor = sma.snlgroups.aggregate([
{ '$match': {
@@ -326,7 +326,7 @@
snlgroup_keys = {}
for d in snlgrp_cursor:
snlgroup_keys[d['snlgroup_id']] = d['canonical_snl']['snlgroup_key']
print snlgroup_keys[40890]
print(snlgroup_keys[40890])
sma2 = SNLMongoAdapter.from_file(
os.path.join(os.environ['DB_LOC'], 'materials_db.yaml')
)
@@ -353,7 +353,7 @@ def analyze(args):
'band_gap': band_gap, 'task_id': material['task_id'],
'volume_per_atom': volume_per_atom
}
print snlgroup_data[40890]
print(snlgroup_data[40890])
filestem = 'mpworks/check_snl/results/bad_snlgroups_2_'
with open(filestem+'in_matdb.csv', 'wb') as f, \
open(filestem+'notin_matdb.csv', 'wb') as g:
@@ -402,7 +402,7 @@ def analyze(args):
rms_dist = matcher.get_rms_dist(primary_structure, secondary_structure)
if rms_dist is not None:
rms_dist_str = "({0:.3g},{1:.3g})".format(*rms_dist)
print rms_dist_str
print(rms_dist_str)
row = [
category, composition,
primary_id, primary_sg_num,
@@ -420,13 +420,13 @@
out_fig = Figure()
badsnls_trace = Scatter(x=[], y=[], text=[], mode='markers', name='SG Changes')
bisectrix = Scatter(x=[0,230], y=[0,230], mode='lines', name='bisectrix')
print 'pulling bad snls from plotly ...'
print('pulling bad snls from plotly ...')
bad_snls = OrderedDict()
for category, text in zip(fig['data'][2]['y'], fig['data'][2]['text']):
for snl_id in map(int, text.split('<br>')):
bad_snls[snl_id] = category
with open('mpworks/check_snl/results/bad_snls.csv', 'wb') as f:
print 'pulling bad snls from database ...'
print('pulling bad snls from database ...')
mpsnl_cursor = sma.snl.find({
'snl_id': { '$in': bad_snls.keys() },
'about.projects': {'$ne': 'CederDahn Challenge'}
@@ -435,7 +435,7 @@
writer.writerow([
'snl_id', 'category', 'snlgroup_key', 'nsites', 'remarks', 'projects', 'authors'
])
print 'writing bad snls to file ...'
print('writing bad snls to file ...')
for mpsnl_dict in mpsnl_cursor:
mpsnl = MPStructureNL.from_dict(mpsnl_dict)
row = [ mpsnl.snl_id, bad_snls[mpsnl.snl_id], mpsnl.snlgroup_key ]
@@ -450,8 +450,8 @@
badsnls_trace['y'].append(sf.get_spacegroup_number())
badsnls_trace['text'].append(mpsnl.snl_id)
if bad_snls[mpsnl.snl_id] == 'SG default':
print sg_num, sf.get_spacegroup_number()
print 'plotting out-fig ...'
print(sg_num, sf.get_spacegroup_number())
print('plotting out-fig ...')
out_fig['data'] = Data([bisectrix, badsnls_trace])
out_fig['layout'] = Layout(
showlegend=False, hovermode='closest',
@@ -467,23 +467,23 @@
ltol=0.2, stol=0.3, angle_tol=5, primitive_cell=False, scale=True,
attempt_supercell=True, comparator=ElementComparator()
)
print 'pulling data from plotly ...'
print('pulling data from plotly ...')
trace = Scatter(x=[], y=[], text=[], mode='markers', name='mismatches')
bad_snls = OrderedDict() # snlgroup_id : [ mismatching snl_ids ]
for category, text in zip(fig['data'][2]['y'], fig['data'][2]['text']):
if category != 'mismatch': continue
for entry in text.split('<br>'):
fields = entry.split(':')
snlgroup_id = int(fields[0].split(',')[0])
print snlgroup_id
print(snlgroup_id)
snlgrp_dict = sma.snlgroups.find_one({ 'snlgroup_id': snlgroup_id })
snlgrp = SNLGroup.from_dict(snlgrp_dict)
s1 = snlgrp.canonical_structure.get_primitive_structure()
bad_snls[snlgroup_id] = []
for i, snl_id in enumerate(fields[1].split(',')):
mpsnl_dict = sma.snl.find_one({ 'snl_id': int(snl_id) })
if 'CederDahn Challenge' in mpsnl_dict['about']['projects']:
print 'skip CederDahn: %s' % snl_id
print('skip CederDahn: %s' % snl_id)
continue
mpsnl = MPStructureNL.from_dict(mpsnl_dict)
s2 = mpsnl.structure.get_primitive_structure()
@@ -496,21 +496,21 @@
if len(bad_snls[snlgroup_id]) < 1:
bad_snls.pop(snlgroup_id, None)
with open('mpworks/check_snl/results/bad_snlgroups.csv', 'wb') as f:
print 'pulling bad snlgroups from database ...'
print('pulling bad snlgroups from database ...')
snlgroup_cursor = sma.snlgroups.find({
'snlgroup_id': { '$in': bad_snls.keys() },
})
writer = csv.writer(f)
writer.writerow(['snlgroup_id', 'snlgroup_key', 'mismatching snl_ids'])
print 'writing bad snlgroups to file ...'
print('writing bad snlgroups to file ...')
for snlgroup_dict in snlgroup_cursor:
snlgroup = SNLGroup.from_dict(snlgroup_dict)
row = [
snlgroup.snlgroup_id, snlgroup.canonical_snl.snlgroup_key,
' '.join(bad_snls[snlgroup.snlgroup_id])
]
writer.writerow(row)
print 'plotting out-fig ...'
print('plotting out-fig ...')
out_fig = Figure()
out_fig['data'] = Data([trace])
out_fig['layout'] = Layout(
@@ -544,11 +544,11 @@ def analyze(args):
snlgroup_id = start_id + d['x'][idx]
mismatch_snl_id, canonical_snl_id = d['text'][idx].split(' != ')
bad_snlgroups[snlgroup_id] = int(mismatch_snl_id)
print errors
print(errors)
fig_data = fig['data'][-1]
fig_data['x'] = [ errors[color] for color in fig_data['marker']['color'] ]
filename = _get_filename()
print filename
print(filename)
#py.plot(fig, filename=filename)
with open('mpworks/check_snl/results/bad_snls.csv', 'wb') as f:
mpsnl_cursor = sma.snl.find({ 'snl_id': { '$in': bad_snls.keys() } })
4 changes: 2 additions & 2 deletions mpworks/check_snl/icsd.py
@@ -10,8 +10,8 @@
for category, text in zip(fig['data'][2]['y'], fig['data'][2]['text']):
for line in text.split('<br>'):
before_colon, after_colon = line.split(':')
snlgroup1, snlgroup2 = map(int, before_colon[1:-1].split(','))
snlgroup1, snlgroup2 = list(map(int, before_colon[1:-1].split(',')))
snls, icsd_matches = after_colon.split('->')
snl1, snl2 = map(int, snls[2:-2].split(','))
snl1, snl2 = list(map(int, snls[2:-2].split(',')))
icsd, matches = icsd_matches.strip().split(' ')
writer.writerow([snlgroup1, snlgroup2, snl1, snl2, int(icsd), matches[1:-1]])
10 changes: 5 additions & 5 deletions mpworks/check_snl/scripts/sg_changes_examples.py
@@ -34,8 +34,8 @@ def _get_mp_link(mp_id):
fig = py.get_figure('tschaume',11)
df = DataFrame.from_dict(fig['data'][1]).filter(['x','y','text'])
grouped_x = df.groupby('x')
print '|==============================='
print '| old SG | close to bisectrix | far from bisectrix'
print('|===============================')
print('| old SG | close to bisectrix | far from bisectrix')
for n,g in grouped_x:
if g.shape[0] < 2: continue # at least two entries at same old SG
grouped_y = g.groupby('y')
Expand All @@ -50,9 +50,9 @@ def _get_mp_link(mp_id):
if ratios[0] > 0.2 or ratios[1] < 0.8: continue
snlgroup_ids = _get_snlgroup_id(first['text']), _get_snlgroup_id(last['text'])
mp_ids = _get_mp_id(snlgroup_ids[0]), _get_mp_id(snlgroup_ids[1])
print '| %d | %d (%d) -> %d -> %s | %d (%d) -> %d -> %s' % (
print('| %d | %d (%d) -> %d -> %s | %d (%d) -> %d -> %s' % (
first['x'],
first['text'], first['y'], snlgroup_ids[0], _get_mp_link(mp_ids[0]),
last['text'], last['y'], snlgroup_ids[1], _get_mp_link(mp_ids[1])
)
print '|==============================='
))
print('|===============================')
2 changes: 1 addition & 1 deletion mpworks/check_snl/scripts/sg_default_bad_snls_check.py
@@ -119,4 +119,4 @@
nonvalid_snlids.append(snl['snl_id'])
else:
valid_snlids.append(snl['snl_id'])
print len(valid_snlids), len(nonvalid_snlids)
print(len(valid_snlids), len(nonvalid_snlids))