interdependency_analysis.py
1596 lines (1430 loc) · 80.7 KB
# -*- coding: utf-8 -*-
""" Previuosly was v14, the most up to date and working code
Created on Mon Jul 23 14:26:27 2012
@author: Craig Robson
"""
'''
Code layout
The main function is used to define the analysis type and the networks involved in the analysis.
The first choice is single or cascading analysis.
If cascading is selected, a node selection method must also be chosen:
degree, betweenness, random, cascading.
This is done by changing the relevant False statement to True.
Only one method can be selected (True) at a time.
From the main function a subsequent function is called based upon the first selection (single or cascading).
These functions then call other functions which combined perform the requested analysis.
For the average path length calculation, an edge length field is used, which must be the same in both tables (in this version anyway).
For the developer:
From the cascading_analysis function the node selection function is called.
This returns the graph with the relevant nodes and edges removed (uses the remove_edges function).
It then calls the analysis function which performs the majority of the analysis (from within a loop in the cascading_analysis function).
This takes the removed node and the graph and 'cleans' the graph so it is suitable for analysis.
It removes isolated nodes and disconnected edges (uses the clean_graph function).
It then removes any subgraphs, using the handle_sub_graphs function.
Finally the results module is called and the results printed and written to file.
The file path is defined at the top of the code.
'''
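#The sequential degree-based removal described above can be sketched in
#isolation. This is an illustrative, self-contained example only; the name
#sequential_degree_sketch is hypothetical and not part of this module:

```python
import networkx as nx

def sequential_degree_sketch(G):
    """Repeatedly remove the highest-degree node, recording the size of the
    largest remaining connected component after each removal."""
    G = G.copy()
    giant_sizes = []
    while G.number_of_nodes() > 0:
        degrees = dict(G.degree())
        node = max(degrees, key=degrees.get)  # highest-degree node
        G.remove_node(node)
        if G.number_of_nodes() > 0:
            giant_sizes.append(max(len(c) for c in nx.connected_components(G)))
        else:
            giant_sizes.append(0)
    return giant_sizes
```

#The real code below additionally cleans isolates and subgraphs after each
#removal; this sketch only tracks the giant component.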
__author__ = "Craig Robson"
__created__ = ""
__year__ = "2014"
__version__ = "5.3.2"
#standard modules
import os, sys, random
import networkx as nx
#custom modules
#sys.path.append("C:\\a8243587_DATA\\GitRepo\\resilience\\modules")
#sys.path.append("C:\\Users\\Craig\\GitRepo\\resilience\\modules")
#sys.path.append("H:\\A-PHD\\resilience\\modules")
sys.path.append(os.path.join(os.path.dirname(__file__), "modules"))
import tools,error_classes,failure_methods,network_handling,outputs
def import_modules(resil_mod_loc):
sys.path.append(resil_mod_loc)
global tools, error_classes, failure_methods, network_handling, outputs
import tools,error_classes,failure_methods,network_handling,outputs
def main(GA, GB, parameters, logfilepath, viewfailure=False, when_to_calc_metrics=True,failures_to_occur=False):
'''
This is the main control function when the analysis is run directly from
the script. This reads in the data provided and processes it as such to run
the desired analysis.
Input: up to two networks, parameters and a logfile path.
Returns: a boolean variable stating if the analysis has been completed.
'''
'''
try:
import_modules(os.path.dirname(__file__)+"\modules")
tools.write_to_log_file(logfilepath, 'Imported modules.')
except:
# cant write to log file as cant access the module to do this
return 1102
'''
print('in main function')
try:
var = tools.write_to_log_file(logfilepath, 'In function after importing modules')
if type(var) == int:
return var
except:
# cannot reach the log file to record the error
return 1102
metrics,failure,handling_variables,fileName,a_to_b_edges,write_step_to_db,write_results_table,db_parameters,store_n_e_atts,length,source_nodes_A,source_nodes_B = parameters
tools.write_to_log_file(logfilepath, 'Calculating the initial metrics.')
#------set up the metrics for the analysis being asked for-----------------
try:
var = metrics_initial(GA,GB,metrics,failure,handling_variables,store_n_e_atts,length,a_to_b_edges,source_nodes_A,source_nodes_B, logfilepath)
except:
return 1106
if type(var) == int:
tools.write_to_log_file(logfilepath, 'Error code %s returned. Failure in metrics_initial function.' %(var))
return var
else:
networks,metrics,graphparameters = var
networks,i,node_list,to_b_nodes,from_a_nodes = graphparameters
basicA,basicB,optionA,optionB,interdependency_metrics,cascading_metrics = metrics
parameters=failure,handling_variables,fileName,a_to_b_edges,write_step_to_db,write_results_table,db_parameters,store_n_e_atts,length
node_to_fail_list = {}
#------run some tests to check inputs are correct--------------------------
try:
var = tools.check_inputs(failure)
if type(var) == int:
tools.write_to_log_file(logfilepath, 'Error code %s returned. Incorrect input parameters were specified.' %(var))
return var
except:
return 1108
tools.write_to_log_file(logfilepath,'Checked inputs and returned as correct.')
i = 0
iterate = True
#--------------------node and edge attributes------------------------------
if store_n_e_atts:
'''
this will be done in the metric computation sections as this results
in less repetition in computations.
Once complete, this will be removed from the code.
'''
pass
#-----------------write networks to database t = 0-------------------------
#GA,GB,GA,GB=networks
if write_step_to_db:
var = outputs.write_to_db(networks,a_to_b_edges,failure,db_parameters,i)
if type(var) == int:
tools.write_to_log_file(logfilepath, 'Error code %s returned. Could not write network to database.' %(var))
return var
#-----------------write metrics to database table t = 0--------------------
if write_results_table:
var = outputs.write_results_table(metrics,i,failure,db_parameters,k=0)
if type(var) == int:
tools.write_to_log_file(logfilepath, 'Error code %s returned. Could not write results to table in database.' %(var))
return var
i +=1
graphparameters=networks,i,node_list,to_b_nodes,from_a_nodes,source_nodes_A,source_nodes_B
#run the analysis if sequential or cascading == true
if failure['sequential']==True or failure['cascading']==True or failure['stand_alone']==True:
#while iterate is still true- network still has edges left
while iterate == True:
print('----------------------------------------------')
print('i is:', i)
#-------------update log file (if file path set)-------------------
#tools.write_to_log_file(logfilepath,'Initiating step')
#-------------run the step-----------------------------------------
graphparameters = networks,i,node_list,to_b_nodes,from_a_nodes,source_nodes_A, source_nodes_B
try:
var = step(graphparameters,parameters,metrics,iterate,logfilepath,when_to_calc_metrics,failures_to_occur,node_to_fail_list)
if type(var) == int:
tools.write_to_log_file(logfilepath,'Error code %s returned. Error running step %s.' %(var,i))
return var
else:
graphparameters,parameters,metrics,iterate = var
except:
tools.write_to_log_file(logfilepath,'Error running step %s. Failed.' %(i))
return 1100
#-------------unpack variables-------------------------------------
basicA,basicB,optionA,optionB,interdependency,cascading = metrics
networks,k,node_list,to_b_nodes,from_a_nodes,source_nodes_A, source_nodes_B = graphparameters
#-------------write networks to database---------------------------
if write_step_to_db:
var = outputs.write_to_db(networks,a_to_b_edges,failure,db_parameters,i)
if type(var) == int:
tools.write_to_log_file(logfilepath, 'Error code %s returned. Error when writing network for step %s to database.' %(var, i))
return var
#-------------write metrics to database table----------------------
if write_results_table:
var = outputs.write_results_table(metrics,i,failure,db_parameters,k)
if type(var) == int:
tools.write_to_log_file(logfilepath, 'Error code %s returned. Errors when writing metric values to databse for step %s.' %(var, i))
return var
if i % 2000 == 0 and i <= 30000:
# this is where some results are written out and the data holders cleared
#var = outputs.outputresults(graphparameters,parameters,metrics,logfilepath=None)
basicA,basicB,optionA,optionB,dependency,cascading=metrics
outfile = fileName.replace('.txt','_%s.txt'%i)
of = open(outfile,'a')
for key in basicA:
of.write('%s: %s\n' %(key,basicA[key]))
if type(basicA[key]) == list:
pass
for key in optionA:
of.write('%s: %s\n' %(key,optionA[key]))
if type(optionA[key]) == list:
optionA[key] = [optionA[key][-1]]
of.close()
tools.write_to_log_file(logfilepath,'Written some results out')
#-------------update log file (if file path set)-------------------
try:
tools.write_to_log_file(logfilepath,'Step %s completed.(%s nodes left; %s edges left).' %(i,basicA['no_of_nodes'][-1],basicA['no_of_edges'][-1]))
except:pass
#-------------stop if stand alone and all nodes have been removed--
if failure['stand_alone']==True:
if i == nx.number_of_nodes(networks[2]): iterate = False
#-------------update i as iteration finished-----------------------
print('updating i:', i)
i += 1
if iterate == False:
try:
#no edges left so output results
var = outputs.outputresults(graphparameters,parameters,metrics,logfilepath=None)
if type(var) == int:
tools.write_to_log_file(logfilepath, 'Error code %s returned. Failed when trying to write the final set of results out at end of analysis.' %(var))
return var
else:
complete = True
except:
tools.write_to_log_file(logfilepath, 'Failed when trying to export results. No error code returned.')
return 1005
elif failure['stand_alone']==True:
pass
else:
#print "No analysis method selected"
return 1101
#complete = None
#print "Completed simulation"
#update log file - only works if file path is set
tools.write_to_log_file(logfilepath,'Completed analysis!')
return complete
def step(graphparameters, parameters, metrics, iterate, logfilepath,when_to_calc_metrics,failures_to_occur,node_to_fail_list):
'''
Performs one time step of analysis when called.
Inputs: graphparameters, parameters iterate
Returns: graphparameters, iterate
'''
#----------------unpack all the data containers----------------------------
failure,handling_variables,fileName,a_to_b_edges,write_step_to_db,write_results_table,db_parameters,store_n_e_atts,length = parameters
basicA,basicB,optionA,optionB,dependency,cascading = metrics
networks,i,node_list,to_b_nodes,from_a_nodes,source_nodes_A,source_nodes_B = graphparameters
GA, GB, GtempA, GtempB = networks
#calc_metrics_all_steps == True #or = 10 for example
#----------------perform the analysis--------------------------------------
#----------------for sequential analysis only------------------------------
#look at adding ability to remove nodes from both A and B
if failure['sequential'] and failure['single']==False and failure['cascading']==False:
failures_to_occur = 3
try:
if failure['degree']:
#find node based on highest degree and remove it
var = failure_methods.sequential_degree(GtempA,failure['interdependency'])
if type(var) == int:
pass
else: GtempA,node = var
elif failure['betweenness']:
if failures_to_occur == False:
#find node with highest betweenness value and remove it
var = failure_methods.sequential_betweenness(GtempA,failure['interdependency'])
if type(var) == int:
pass
else: GtempA,node = var
elif failures_to_occur == True:
tools.write_to_log_file(logfilepath,'Error in finding node to remove. The failures_to_occur variable is incorrectly set.')
return 1001
else:
#used to remove features using a pre-calculated list
print('Using listing betweenness method')
var = failure_methods.sequential_betweenness_by_list(GtempA,failure['interdependency'],failures_to_occur,node_to_fail_list)
if type(var) == int:
print(var)
pass
else:
GtempA,node,node_to_fail_list = var
print('Generated list:', node_to_fail_list)
elif failure['from_list']!=False:
print('THIS METHOD HAS NOT BEEN FINISHED OR TESTED')
fail_list = []
var = failure_methods.sequential_from_list(GtempA,failure['interdependency'],fail_list,i)
if type(var) == int:
pass
else: GtempA,node = var
elif failure['random']:
#randomly select the next node and remove it
var = failure_methods.sequential_random(GtempA, handling_variables['no_isolates'],failure['interdependency'])
if type(var) == int:
pass
else: GtempA,node = var
elif failure['flow']:
#select the node with the greatest flow - uses field named 'flow'
var = failure_methods.sequential_flow(GtempA, handling_variables['no_isolates'],failure['interdependency'])
if type(var) == int:
pass
else: GtempA,node = var
else:
tools.write_to_log_file(logfilepath, 'Error in failure dictionary - no component selection method chosen (set as True). Sequential process chosen.')
return 1001
except:
tools.write_to_log_file(logfilepath, 'Error in finding the next node to remove given selection method. Sequential process chosen.')
return 1002
#update the counter
#basicA['no_of_nodes_removed'].append(len(basicA['no_of_nodes_removed']))
basicA['no_of_nodes_removed'].append(basicA['no_of_nodes_removed'][-1]+1)
#-----removes source node from list if it is the selected node
if source_nodes_A != None:
for nd in source_nodes_A:
if node == nd:
source_nodes_A.remove(node)
break
#----------------for cascading analysis------------------------------------
elif failure['cascading'] and failure['single']==False and failure['sequential']==False:
#unpack the cascading metrics and create some blank containers
dead, dlist, removed_nodes, deadlist = cascading
#------------identify subnodes and isolated nodes--------------------
for g in nx.connected_component_subgraphs(GtempA):
if g.number_of_nodes() == 1:
optionA['isolated_nodes'].append(g.nodes())
elif g.number_of_nodes() != 0:
optionA['subnodes'].append(g.nodes())
else:
raise error_classes.GeneralError('Error. Component has zero nodes.')
#------------on the first time step only-----------------------
#need to initiate the failure through removing a node to begin with
if i == 1:
if failure['degree'] and failure['betweenness']==False and failure['random']==False:
var = tools.max_val_random(nx.degree(GtempA))
if type(var) == int:
return var+1
else: ma, dead = var
elif failure['betweenness']==True and failure['random']==False and failure['degree']==False:
var = tools.max_val_random(nx.betweenness_centrality(GtempA))
if type(var) == int:
return var+2
else: ma, dead = var
elif failure['random']==True and failure['degree']==False and failure['betweenness']==False:
dead = dead
elif failure['flow']==True:
ma, dead = tools.maximum_flow(GtempA)
#update the network and find the next set of nodes to remove
var = failure_methods.cascading_failure(GtempA,dlist,dead,i,basicA['subnodes'], basicA['isolated_nodes'],basicA['nodes_removed'],failure['interdependency'])
if type(var) == int:
return var
else: GtempA,dlist,removed_nodes,deadlist = var
node = deadlist
#------------on all but first time step----------------------------
else:
#update the network and find the next set of nodes to remove
GtempA,dlist,removed_nodes,deadlist = failure_methods.cascading_failure(GtempA,dlist,dead,i,optionA['subnodes'],optionA['isolated_nodes'],basicA['nodes_removed'],failure['interdependency'])
node = deadlist
#update metric
basicA['no_of_nodes_removed'].append(basicA['no_of_nodes_removed'][-1]+len(deadlist))
#-----removes source node from list if it is the selected node
if source_nodes_A != None:
for nd in source_nodes_A:
for nde in deadlist:
if nde == nd:
source_nodes_A.remove(nde)
break
#------------package cascading metrics together----------------------
cascading = dead, dlist, removed_nodes, deadlist
#----------------for single analysis---------------------------------------
elif failure['single']==True and failure['sequential']==False and failure['cascading']==False:
#create a copy of the original network - will be complete
GtempA = GA.copy()
#select and remove a node from the network
var = failure_methods.single_random(GtempA, node_list, failure['interdependency'])
if type(var) == int:
return var
else: GtempA,node = var
#------------when node list is empty change iterate------------------
if node_list == []:
iterate = False
#update the metric
basicA['no_of_nodes_removed'].append(1)
#-----removes source node from list if it is the selected node
if source_nodes_A != None:
for nd in source_nodes_A:
if node == nd:
source_nodes_A.remove(node)
break
#----------------update the list of removed nodes--------------------------
basicA['nodes_removed'].append([node])
basicA['nodes_selected_to_fail'].append([node])
#----------------re-package networks and metrics which have been changed---
networks = GA, GB , GtempA, GtempB
#----------------functions for analysis methods----------------------------
if failure['interdependency'] and failure['stand_alone']==False and failure['dependency']==False:
'''Needs to be developed'''
#basicB['nodes_selected_to_fail'].append([])
pass
elif failure['dependency'] and failure['stand_alone']==False and failure['interdependency']==False:
'''
'''
#special one for cascading as loop needed to handle multiple network A nodes being removed in one iteration
#to store all nodes for the entire iteration which are removed from networkB due to a broken dependence link
temp = []
#------------if cascading analysis-----------------------------------
#this needs checking
if failure['cascading']==True:
x = 0
while x < len(deadlist):
node = deadlist[x]
var = network_handling.check_dependency_edges(GtempA, GtempB, node, to_b_nodes, from_a_nodes, temp)
if type(var) == int:
tools.write_to_log_file(logfilepath, 'Error code %s returned. Could not find chosen node to remove (check_dependency_edges).' %(var))
return var
GtempA, GtempB, to_b_nodes, from_a_nodes, temp = var
x += 1
#------------run for all other analysis scenarios--------------------
else:
if GtempA.number_of_edges() != 0:
nodes_removed=[]
#are we using source nodes
if source_nodes_A != None:
var = network_handling.check_connected_to_source_nodes(GtempA,source_nodes_A,nodes_removed)
if type(var) == int:
tools.write_to_log_file(logfilepath, 'Error code %s returned.' %(var))
return var
else:
GtempA,nodes_removed = var
#write changes out to metrics
if optionA['failed_no_con_to_a_source']!=False:
optionA['failed_no_con_to_a_source'].append(nodes_removed)
if optionA['source_nodes']!=False:
temp=[]
for nd in source_nodes_A: temp.append(nd)
optionA['source_nodes'].append(temp)
#add the selected node to remove to the list
nodes_removed.append(node)
#checking dependency edges
var = network_handling.check_dependency_edges(networks,nodes_removed,basicA,basicB,optionA,optionB,to_b_nodes,from_a_nodes,a_to_b_edges,temp,failure['interdependency'])
if type(var) == int:
tools.write_to_log_file(logfilepath, 'Error code %s returned. Could not find chosen node to remove (check_dependency_edges).' %(var))
return var
else:
#un-pack variables returned
networks,nodes_removed_from_b,basicA,basicB,optionA,optionB,to_b_nodes,from_a_nodes,a_to_b_edges = var
GA, GB, GtempA, GtempB = networks
basicB['no_of_nodes_removed'].append(basicB['no_of_nodes_removed'][-1]+len(nodes_removed_from_b))
basicB['nodes_removed'].append(nodes_removed_from_b)
dependency['no_of_nodes_removed_from_B'].append(len(nodes_removed_from_b))
dependency['nodes_removed_from_B'].append(nodes_removed_from_b)
#check if any of the failed nodes are source nodes in B - remove from list if they are
if source_nodes_B != None:
for nd in nodes_removed_from_b:
try: source_nodes_B.remove(nd)
except:pass
else:
basicB['nodes_removed'].append([])
dependency['nodes_removed_from_B'].append([])
dependency['no_of_nodes_removed_from_B'].append(0)
if optionA['source_nodes'] != False: optionA['source_nodes'].append([])
if optionA['failed_no_con_to_a_source'] != False: optionA['failed_no_con_to_a_source'].append([])
if GtempB.number_of_edges() != 0:
if source_nodes_B != None:
nodes_removed=[]
var = network_handling.check_connected_to_source_nodes(GtempB,source_nodes_B,nodes_removed)
if type(var) == int:
tools.write_to_log_file(logfilepath, 'Error code %s returned.' %(var))
return var
else:
GtempB,nodes_removed = var
#write out to some metrics
if optionB['failed_no_con_to_a_source'] != False:
optionB['failed_no_con_to_a_source'].append(nodes_removed)
if optionB['source_nodes'] != False:
temp=[]
for nd in source_nodes_B: temp.append(nd)
optionB['source_nodes'].append(temp)
else:
if optionB['source_nodes'] != False: optionB['source_nodes'].append([])
if optionB['failed_no_con_to_a_source'] != False: optionB['failed_no_con_to_a_source'].append([])
#------------run the actual analysis---------------------------------
#analyse network B
try:
var = analysis_B(parameters,iterate,GtempB,i,to_b_nodes,from_a_nodes,node_list,basicB,optionB,to_b_nodes,from_a_nodes,source_nodes_B,when_to_calc_metrics,logfilepath,net='B')
if type(var) == int:
tools.write_to_log_file(logfilepath, 'Error code %s returned. Failed running the post component removal analysis.' %(var))
return var
else:
iterate,GtempB,i,to_a_nodes,from_b_nodes,a_to_b_edges,node_list,basicB,optionB,source_nodes_B = var
except:
tools.write_to_log_file(logfilepath, 'Failed when running analysis of network B.')
#analyse network A
try:
var = analysis_B(parameters,iterate,GtempA,i,to_b_nodes,from_a_nodes,node_list,basicA,optionA,to_b_nodes,from_a_nodes,source_nodes_A,when_to_calc_metrics,logfilepath,net='A')
if type(var) == int:
tools.write_to_log_file(logfilepath, 'Error code %s returned. Failed running the post component removal analysis.' %(var))
return var
else:
iterate,GtempA,i,to_a_nodes,from_b_nodes,a_to_b_edges,node_list,basicA,optionA,source_nodes_A = var
except:
tools.write_to_log_file(logfilepath, 'Failed when running analysis of network A.')
if i != -100: basicA['nodes_removed'].append(basicA['nodes_removed'].pop()+basicA['isolated_nodes_removed'][i])
if optionA['failed_no_con_to_a_source'] != False:
basicA['no_of_nodes_removed'].append(basicA['no_of_nodes_removed'].pop()+len(optionA['failed_no_con_to_a_source'][-1]))
if optionB['failed_no_con_to_a_source'] != False:
basicB['no_of_nodes_removed'].append(basicB['no_of_nodes_removed'].pop()+len(optionB['failed_no_con_to_a_source'][-1]))
#------------move counter on-----------------------------------------
i += 1
elif failure['stand_alone'] and failure['dependency']==False and failure['interdependency']==False :
try:
if GtempA.number_of_edges() != 0:
#check all nodes are still connected to a source node if set
if source_nodes_A != None:
nodes_removed=[]
var = network_handling.check_connected_to_source_nodes(GtempA,source_nodes_A,nodes_removed)
if type(var) == int:
tools.write_to_log_file(logfilepath, 'Error code %s returned.' %(var))
return var
else:
GtempA,nodes_removed = var
#write out to some metrics
if optionA['failed_no_con_to_a_source'] != False:
optionA['failed_no_con_to_a_source'].append(nodes_removed)
if optionA['source_nodes'] != False:
temp=[]
for nd in source_nodes_A: temp.append(nd)
optionA['source_nodes'].append(temp)
else:
if optionA['source_nodes'] != False: optionA['source_nodes'].append([])
if optionA['failed_no_con_to_a_source'] != False: optionA['failed_no_con_to_a_source'].append([])
except:
tools.write_to_log_file(logfilepath, 'Failed running flow checks on network.')
return 1010
#run the analysis
try:
var = analysis_B(parameters,iterate,GtempA,i,to_b_nodes,from_a_nodes,node_list,basicA,optionA,to_b_nodes, from_a_nodes,source_nodes_A,when_to_calc_metrics,logfilepath,net='A') #run the analysis
if type(var) == int:
tools.write_to_log_file(logfilepath, 'Error code %s returned. Failed running the post component removal analysis.' %(var))
return var
else:
iterate,GtempA,i,to_a_nodes,from_b_nodes,a_to_b_edges,node_list,basicA,optionA,source_nodes_A = var
except:
tools.write_to_log_file(logfilepath, 'Failed when running analysis. No error code returned.')
return 1011
if i != -100:
basicA['nodes_removed'].append(basicA['nodes_removed'].pop()+basicA['isolated_nodes_removed'][i])
if optionA['failed_no_con_to_a_source'] != False:
basicA['nodes_removed'].append(basicA['nodes_removed'].pop()+optionA['failed_no_con_to_a_source'][i])
if optionA['failed_no_con_to_a_source'] != False:
basicA['no_of_nodes_removed'].append(basicA['no_of_nodes_removed'].pop()+len(optionA['failed_no_con_to_a_source'][-1]))
i += 1
else:
tools.write_to_log_file(logfilepath, 'No analysis type selected.')
return 1003
#----------------re-package all data into respective containers------------
networks = GA, GB, GtempA, GtempB
metrics = basicA,basicB,optionA, optionB,dependency,cascading
graphparameters = networks,i,node_list,to_b_nodes,from_a_nodes,source_nodes_A,source_nodes_B
parameters = failure,handling_variables,fileName,a_to_b_edges,write_step_to_db,write_results_table,db_parameters,store_n_e_atts,length
var = graphparameters,parameters,metrics,iterate
return var
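#The subgraph handling used within a step (handle_sub_graphs in the
#network_handling module) can be approximated as follows. This is a hedged
#sketch of the behaviour described in the layout notes at the top of the
#file, not the module's actual implementation:

```python
import networkx as nx

def handle_sub_graphs_sketch(G):
    """Keep only the largest connected component of G; return the reduced
    graph, the node lists of the discarded subgraphs, and a count of the
    discarded nodes."""
    components = sorted(nx.connected_components(G), key=len, reverse=True)
    subnodes = [sorted(c) for c in components[1:]]          # smaller components
    discarded = [n for comp in subnodes for n in comp]
    G = G.copy()
    G.remove_nodes_from(discarded)                          # drop the subgraphs
    return G, subnodes, len(discarded)
```

#The real function also returns updated node and edge lists; this sketch only
#shows the core keep-the-giant-component step.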
'''calculate values at end of step'''
def analysis_B(parameters,iterate,Gtemp,i,to_a_nodes,from_b_nodes,node_list,basic_metrics,option_metrics,to_b_nodes,from_a_nodes,source_nodes,when_to_calc_metrics,logfilepath,net):
'''
Failure method has already been run. This checks for isolated nodes and
subgraphs (goes through the handling variables, then calculates the metrics
required.)
All errors from here begin with 20, e.g. 2003.
'''
'''
try:
tools.write_to_log_file(logfilepath, 'Entered analysis B function.')
except:
return 2000
'''
print('analysis B i is:', i)
#------------unpack the holding variables------------------------------
if len(parameters) == 9:
pass
else:
return 2051
try:
failure,handling_variables,fileName,a_to_b_edges,write_step_to_db,write_results_table,db_parameters,store_n_e_atts,length = parameters
except: return 2034
try:
basic_metrics['no_of_isolated_nodes'].append(len(nx.isolates(Gtemp)))
except: return 2035
#------------check for isolated nodes----------------------------------
try:
if handling_variables['remove_isolates']==True:
if Gtemp.number_of_edges() != 0:
var = network_handling.remove_isolates(Gtemp,node_list,option_metrics,basic_metrics,to_b_nodes,from_a_nodes,a_to_b_edges,net)
if type(var) == int:
return var
else:
Gtemp,node_list,basic_metrics,option_metrics,isolated_nodes,to_b_nodes,from_a_nodes,a_to_b_edges = var
#not too sure why I need separate things for net a and b??
if net == 'B':
if option_metrics['isolated_nodes']!=False:
option_metrics['isolated_nodes'].append(isolated_nodes)
if option_metrics['no_of_isolated_nodes_removed']!=False:
option_metrics['no_of_isolated_nodes_removed'].append(len(isolated_nodes))
basic_metrics['isolated_nodes_removed'].append(isolated_nodes)
basic_metrics['no_of_nodes_removed'].append(basic_metrics['no_of_nodes_removed'].pop()+len(isolated_nodes))
basic_metrics['nodes_removed'].append(basic_metrics['nodes_removed'].pop()+basic_metrics['isolated_nodes_removed'][i])
if net == 'A':
basic_metrics['isolated_nodes_removed'].append(isolated_nodes)
if option_metrics['isolated_nodes']!=False:
option_metrics['isolated_nodes'].append(isolated_nodes)
if option_metrics['no_of_isolated_nodes_removed']!=False:
option_metrics['no_of_isolated_nodes_removed'].append(len(isolated_nodes))
if source_nodes != None:
for nd in isolated_nodes:
try: source_nodes.remove(nd)
except: pass
else:
if option_metrics['isolated_nodes']!=False:
option_metrics['isolated_nodes'].append([])
if option_metrics['no_of_isolated_nodes_removed']!=False:
option_metrics['no_of_isolated_nodes_removed'].append(0)
basic_metrics['isolated_nodes_removed'].append([])
elif handling_variables['remove_isolates']==False:
if option_metrics['no_of_isolated_nodes_removed'] != False:
option_metrics['no_of_isolated_nodes_removed'].append(0)
basic_metrics['isolated_nodes_removed'].append([])
if option_metrics['isolated_nodes']!=False:
option_metrics['isolated_nodes'].append([nx.isolates(Gtemp)])
except: return 2028
#----------------if the graph is still connected-----------------------
try:
num_edges = Gtemp.number_of_edges()
except: return 2036
try:
if num_edges != 0:
try:
#the graph is not disconnected
nodelists = Gtemp.nodes()
edgelists = Gtemp.edges()
#if subgraphs are to be removed for the analysis ie. for infrastructure modelling
if handling_variables['remove_subgraphs']==True:
#remove subgraphs and record the details of them
var = network_handling.handle_sub_graphs(Gtemp)
if type(var) == int:
return var
else:
Gtemp, subnodes, nsubnodes, nodelists, edgelists = var
if option_metrics['subnodes'] != False:
option_metrics['subnodes'].append(subnodes)
basic_metrics['no_of_nodes_removed'].append(basic_metrics['no_of_nodes_removed'].pop()+nsubnodes)
if option_metrics['no_of_subnodes'] != False:
option_metrics['no_of_subnodes'].append(nsubnodes)
nodes_removed = basic_metrics['nodes_removed'].pop()
for subgraph in subnodes:
for nd in subgraph:
nodes_removed.append(nd)
basic_metrics['nodes_removed'].append(nodes_removed)
if source_nodes is not None:
for nd in nodes_removed:
try: source_nodes.remove(nd)
except ValueError: pass
var = network_handling.clean_node_lists(subnodes,node_list,to_b_nodes,from_a_nodes)
if type(var) == int:
return var
else:
node_list, to_b_nodes, from_a_nodes = var
basic_metrics['no_of_components'].append(nx.number_connected_components(Gtemp))
#if subgraphs are to be left as part of the network
elif handling_variables['remove_subgraphs']==False:
#get a list of all connected components
temp = nx.connected_component_subgraphs(Gtemp)
number_of_components = nx.number_connected_components(Gtemp)
#add the number components to the respective list
basic_metrics['no_of_components'].append(number_of_components)
temp=[]
if option_metrics['subnodes']!=False:
for g in nx.connected_component_subgraphs(Gtemp):
temp.append(g.nodes()) #node lists, one per connected component, largest first
#ignore the first (giant) component; the remainder are the subgraphs
temp2 = temp[1:]
no_of_subnodes = sum(len(nodes) for nodes in temp2)
option_metrics['subnodes'].append(temp2)
if option_metrics['no_of_subnodes']!=False:
option_metrics['no_of_subnodes'].append(no_of_subnodes)
else:
#there is an error with the variable
raise error_classes.GeneralError('Error. Variable REMOVE_SUBGRAPHS must be set as True or False only.')
except: return 2050
#------------run if no edges left--------------------------------------
elif num_edges == 0:
try:
#at the last removal of a node, all remaining edges were consequently removed
#update the metrics
if option_metrics['subnodes'] != False: option_metrics['subnodes'].append([])
if option_metrics['no_of_subnodes'] != False: option_metrics['no_of_subnodes'].append(0)
except: return 2051
except:
return 2036
#-------metric calcs which work no matter the state of the network-----
try:
if option_metrics['size_of_components']!=False:
try:
temp = []
for g in nx.connected_component_subgraphs(Gtemp):
temp.append(g.number_of_nodes())
option_metrics['size_of_components'].append(temp)
except: return 2027
#when_to_calc_metrics = 5 #here for testing
if when_to_calc_metrics != True: # it has been set as a number: only calculate every nth step
calc_metrics = (i >= 0 and i % when_to_calc_metrics == 0)
elif when_to_calc_metrics == True:
calc_metrics = True
if calc_metrics == True:
if option_metrics['maximum_betweenness_centrality']!=False or option_metrics['avg_betweenness_centrality']!=False:
try:
temp = nx.betweenness_centrality(Gtemp)
except:
tools.write_to_log_file(logfilepath, 'Failed when calculating the betweenness centrality.')
return 2006
if option_metrics['maximum_betweenness_centrality']!=False:
option_metrics['maximum_betweenness_centrality'].append(max(temp.values()))
if option_metrics['avg_betweenness_centrality']!=False:
try:
option_metrics['avg_betweenness_centrality'].append(sum(temp.values())/len(temp))
except: return 2026
if store_n_e_atts == True:
for key in temp: Gtemp.node[key]['betweenness_centrality'] = temp[key]
if option_metrics['clustering_coefficient']!=False:
try:
temp = nx.clustering(Gtemp)
failed_calc = False
except:
tools.write_to_log_file(logfilepath, 'Failed when calculating the clustering coefficient.')
option_metrics['clustering_coefficient'].append(-9999)
failed_calc = True
pass
#return 2007
if failed_calc == False:
try:
avg = sum(temp.values())/len(temp)
except: return 2032
try:
option_metrics['clustering_coefficient'].append(avg)
except:
tools.write_to_log_file(logfilepath, 'Failed adding the clustering coefficient value to the metric list. The list is:')
tools.write_to_log_file(logfilepath, option_metrics['clustering_coefficient'])
tools.write_to_log_file(logfilepath, avg)
#return 2031
if store_n_e_atts == True:
for key in temp: Gtemp.node[key]['clustering_coefficient'] = temp[key]
if option_metrics['transitivity']!=False:
try:
option_metrics['transitivity'].append(nx.transitivity(Gtemp))
except:
tools.write_to_log_file(logfilepath, 'Failed to calculate the transitivity.')
return 2008
if option_metrics['square_clustering']!=False:
try:
temp = nx.square_clustering(Gtemp)
avg = sum(temp.values())/len(temp)
except:
tools.write_to_log_file(logfilepath, 'Failed to calculate the square clustering coefficient.')
return 2009
option_metrics['square_clustering'].append(avg)
if store_n_e_atts == True:
for key in temp: Gtemp.node[key]['square_clustering'] = temp[key]
if option_metrics['avg_degree_connectivity'] != False:
try:
temp = nx.average_degree_connectivity(Gtemp)
except:
tools.write_to_log_file(logfilepath, 'Failed calculating the average degree connectivity.')
return 2010
option_metrics['avg_degree_connectivity'].append(list(temp.values()))
if option_metrics['avg_closeness_centrality'] != False:
try:
temp = nx.closeness_centrality(Gtemp)
avg = sum(temp.values())/len(temp)
except:
tools.write_to_log_file(logfilepath, 'Failed calculating the closeness centrality.')
return 2011
option_metrics['avg_closeness_centrality'].append(avg)
if store_n_e_atts == True:
for key in temp:
Gtemp.node[key]['avg_closeness_centrality'] = temp[key]
if option_metrics['avg_neighbor_degree'] != False:
try:
temp = nx.average_neighbor_degree(Gtemp)
avg = sum(temp.values())/len(temp)
except:
tools.write_to_log_file(logfilepath, 'Failed to calculate the average neighbor degree.')
return 2012
option_metrics['avg_neighbor_degree'].append(avg)
if store_n_e_atts == True:
for key in temp: Gtemp.node[key]['avg_neighbor_degree'] = temp[key]
else:
if option_metrics['maximum_betweenness_centrality']!=False:
option_metrics['maximum_betweenness_centrality'].append(-9998)
if option_metrics['avg_betweenness_centrality']!=False:
option_metrics['avg_betweenness_centrality'].append(-9998)
if option_metrics['clustering_coefficient']!=False:
option_metrics['clustering_coefficient'].append(-9998)
if option_metrics['transitivity']!=False:
option_metrics['transitivity'].append(-9998)
if option_metrics['square_clustering']!=False:
option_metrics['square_clustering'].append(-9998)
if option_metrics['avg_degree_connectivity']!=False:
option_metrics['avg_degree_connectivity'].append(-9998)
if option_metrics['avg_closeness_centrality']!=False:
option_metrics['avg_closeness_centrality'].append(-9998)
if option_metrics['avg_neighbor_degree']!=False:
option_metrics['avg_neighbor_degree'].append(-9998)
except:
return 2053
#------------re-calc the number of edges-------------------------------
#this is needed if subgraphs were removed
try:
numofedges = None
try:
numofedges = nx.number_of_edges(Gtemp)
except: pass
if numofedges is None:
try:
edges = Gtemp.edges()
numofedges = len(edges)
except:
return 20376
except:
tools.write_to_log_file(logfilepath, 'Failed to count the number of edges in the network.')
tools.write_to_log_file(logfilepath, 'Number of nodes in the network = %s' %(nx.number_of_nodes(Gtemp)))
return 2037
#------------if there are no edges left--------------------------------
try:
if numofedges == 0:
#set i to a negative sentinel so the iteration stops at the end of this step
i = -100
#add values for the metrics which are not set as False
try:
if option_metrics['avg_path_length'] != False:
option_metrics['avg_path_length'].append(0.0)
if option_metrics['avg_path_length_of_components']!=False:
option_metrics['avg_path_length_of_components'].append([0.0])
if option_metrics['avg_path_length_of_giant_component']!= False:
option_metrics['avg_path_length_of_giant_component'].append(0.0)
if option_metrics['avg_geo_path_length'] != False:
option_metrics['avg_geo_path_length'].append(0.0)
if option_metrics['avg_geo_path_length_of_components']!=False:
option_metrics['avg_geo_path_length_of_components'].append([0.0])
if option_metrics['avg_geo_path_length_of_giant_component']!=False:
option_metrics['avg_geo_path_length_of_giant_component'].append(0.0)
if option_metrics['giant_component_size'] != False:
option_metrics['giant_component_size'].append(0)
if option_metrics['avg_degree'] != False:
option_metrics['avg_degree'].append(0)
if option_metrics['density']!=False:
option_metrics['density'].append(0.0)
if option_metrics['assortativity_coefficient']!=False:
option_metrics['assortativity_coefficient'].append(0.0)
if option_metrics['avg_degree_centrality']!=False:
option_metrics['avg_degree_centrality'].append(0.0)
if option_metrics['diameter']!=False:
option_metrics['diameter'].append(0.0)
basic_metrics['no_of_components'].append(nx.number_connected_components(Gtemp))
basic_metrics['no_of_edges'].append(0)
except:
return 2021
#set iterate as False so it stops after this time step
iterate = False
#------------if the number of edges is greater than zero---------------
elif numofedges != 0:
try:
#---------------average path length calculations-------------------
if option_metrics['avg_path_length'] != False or option_metrics['avg_path_length_of_components']!=False:
try:
#calculate the average path length of each connected component
#average = network_handling.whole_graph_av_path_length(Gtemp)
temp=[]
for g in nx.connected_component_subgraphs(Gtemp):
try:
temp.append(nx.average_shortest_path_length(g))
except:
# could not calculate the average path length of this component
tools.write_to_log_file(logfilepath, 'Could not calculate the average path length of a subgraph; it was not added to the list.')
pass
try:
if temp == []:
tools.write_to_log_file(logfilepath, 'The path length for all components of the network could not be calculated')
return 2016
except:
return 2017
try:
tools.write_to_log_file(logfilepath,'Option_metrics[avg_path_length] = %s.' %(option_metrics['avg_path_length']))
if option_metrics['avg_path_length']!=False:
try:
option_metrics['avg_path_length'].append(sum(temp)/len(temp))
except:
return 2019
except:
return 2018
try:
if option_metrics['avg_path_length_of_components']!=False:
option_metrics['avg_path_length_of_components'].append(temp)
except:
return 2020
except:
tools.write_to_log_file(logfilepath,'Failed to calculate the average path length.')
return 2001
if option_metrics['avg_path_length_of_giant_component'] != False and option_metrics['avg_path_length']!=False:
try:
option_metrics['avg_path_length_of_giant_component'].append(temp[0])
except: