From 0c41adb2c98980f07a963846636f163e1755ca80 Mon Sep 17 00:00:00 2001 From: KU-CVLAB <96568164+KU-CVLAB@users.noreply.github.com> Date: Fri, 30 Dec 2022 15:42:30 +0900 Subject: [PATCH 1/9] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index bf9752c..9276e69 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,5 @@ # SuperCATs -For more information, check out the paper on [paper link](https://ieeexplore.ieee.org/document/9954872). Also check out project page here [Project Page link].
+For more information, check out the paper on [paper link](https://ieeexplore.ieee.org/document/9954872). Also check out project page here [Project Page link](https://ku-cvlab.github.io/SuperCATs/).
*This paper is accepted in ICCE-Asia'22* From 6e8e3ff5c3a8f739b09ec16eee24a0954d06a45f Mon Sep 17 00:00:00 2001 From: KU-CVLAB <96568164+KU-CVLAB@users.noreply.github.com> Date: Fri, 30 Dec 2022 15:42:50 +0900 Subject: [PATCH 2/9] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 9276e69..a48232c 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,5 @@ # SuperCATs -For more information, check out the paper on [paper link](https://ieeexplore.ieee.org/document/9954872). Also check out project page here [Project Page link](https://ku-cvlab.github.io/SuperCATs/).
+For more information, check out the paper on [[paper link]](https://ieeexplore.ieee.org/document/9954872). Also check out project page here [Project Page link](https://ku-cvlab.github.io/SuperCATs/).
*This paper is accepted in ICCE-Asia'22* From 3c07bb7e0e6f609aceb0bf939cc8797d9e5b39d1 Mon Sep 17 00:00:00 2001 From: KU-CVLAB <96568164+KU-CVLAB@users.noreply.github.com> Date: Fri, 30 Dec 2022 15:43:02 +0900 Subject: [PATCH 3/9] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index a48232c..a8ef28d 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,5 @@ # SuperCATs -For more information, check out the paper on [[paper link]](https://ieeexplore.ieee.org/document/9954872). Also check out project page here [Project Page link](https://ku-cvlab.github.io/SuperCATs/).
+For more information, check out the paper on [[paper link]](https://ieeexplore.ieee.org/document/9954872). Also check out project page here [[Project Page link]](https://ku-cvlab.github.io/SuperCATs/).
*This paper is accepted in ICCE-Asia'22* From 614f79cf830dce3329df4847ff981b6da3327134 Mon Sep 17 00:00:00 2001 From: KU-CVLAB <96568164+KU-CVLAB@users.noreply.github.com> Date: Fri, 30 Dec 2022 19:12:54 +0900 Subject: [PATCH 4/9] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index a8ef28d..34b9bd4 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ For more information, check out the paper on [[paper link]](https://ieeexplore.ieee.org/document/9954872). Also check out project page here [[Project Page link]](https://ku-cvlab.github.io/SuperCATs/).
*This paper is accepted in ICCE-Asia'22* - + >**Cost Aggregation with Transformers for Sparse Correspondence**

>Abstract : In this work, we introduce a novel network, namely SuperCATs, which aims to find a correspondence field between visually similar images. SuperCATs stands on the shoulders of the recently proposed matching networks, SuperGlue and CATs, taking the merits of both for constructing an integrative framework. Specifically, given keypoints and corresponding descriptors, we first apply attentional aggregation consisting of self- and cross-graph neural network to obtain feature descriptors. Subsequently, we construct a cost volume using the descriptors, which then undergoes a transformer aggregator for cost aggregation. With this approach, we manage to replace the handcrafted module based on solving an optimal transport problem initially included in SuperGlue with a transformer well known for its global receptive fields, making our approach more robust to severe deformations. We conduct experiments to demonstrate the effectiveness of the proposed method, and show that the proposed model is on par with SuperGlue for both indoor and outdoor scenes. From 974ee202571b13d9f1bd148bf1805dacb1764f20 Mon Sep 17 00:00:00 2001 From: KU-CVLAB <96568164+KU-CVLAB@users.noreply.github.com> Date: Fri, 30 Dec 2022 19:13:06 +0900 Subject: [PATCH 5/9] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 34b9bd4..40fe14f 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ For more information, check out the paper on [[paper link]](https://ieeexplore.ieee.org/document/9954872). Also check out project page here [[Project Page link]](https://ku-cvlab.github.io/SuperCATs/).
*This paper is accepted in ICCE-Asia'22* - + >**Cost Aggregation with Transformers for Sparse Correspondence**

>Abstract : In this work, we introduce a novel network, namely SuperCATs, which aims to find a correspondence field between visually similar images. SuperCATs stands on the shoulders of the recently proposed matching networks, SuperGlue and CATs, taking the merits of both for constructing an integrative framework. Specifically, given keypoints and corresponding descriptors, we first apply attentional aggregation consisting of self- and cross-graph neural network to obtain feature descriptors. Subsequently, we construct a cost volume using the descriptors, which then undergoes a transformer aggregator for cost aggregation. With this approach, we manage to replace the handcrafted module based on solving an optimal transport problem initially included in SuperGlue with a transformer well known for its global receptive fields, making our approach more robust to severe deformations. We conduct experiments to demonstrate the effectiveness of the proposed method, and show that the proposed model is on par with SuperGlue for both indoor and outdoor scenes. From a224d2ef61edc902e28c70f489ad5da03e86fcb0 Mon Sep 17 00:00:00 2001 From: KU-CVLAB <96568164+KU-CVLAB@users.noreply.github.com> Date: Fri, 30 Dec 2022 19:13:16 +0900 Subject: [PATCH 6/9] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 40fe14f..daa2ac9 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ For more information, check out the paper on [[paper link]](https://ieeexplore.ieee.org/document/9954872). Also check out project page here [[Project Page link]](https://ku-cvlab.github.io/SuperCATs/).
*This paper is accepted in ICCE-Asia'22* - + >**Cost Aggregation with Transformers for Sparse Correspondence**

>Abstract : In this work, we introduce a novel network, namely SuperCATs, which aims to find a correspondence field between visually similar images. SuperCATs stands on the shoulders of the recently proposed matching networks, SuperGlue and CATs, taking the merits of both for constructing an integrative framework. Specifically, given keypoints and corresponding descriptors, we first apply attentional aggregation consisting of self- and cross-graph neural network to obtain feature descriptors. Subsequently, we construct a cost volume using the descriptors, which then undergoes a transformer aggregator for cost aggregation. With this approach, we manage to replace the handcrafted module based on solving an optimal transport problem initially included in SuperGlue with a transformer well known for its global receptive fields, making our approach more robust to severe deformations. We conduct experiments to demonstrate the effectiveness of the proposed method, and show that the proposed model is on par with SuperGlue for both indoor and outdoor scenes. 
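The aggregation step the abstract describes — building a cost volume from projected descriptors and normalizing it — can be sketched in a few lines. This is a minimal illustration, not the full model: the shapes are made up, and it mirrors the `softmax_with_temperature` helper (from SFNet, per its own docstring) and the `einsum` cost-volume construction that appear in the `sjlee_backup` files removed in the next patch.

```python
import torch

def softmax_with_temperature(x, beta=2.0, d=1):
    """Temperature softmax (SFNet, Lee et al.), as in the removed IMC.py."""
    M, _ = x.max(dim=d, keepdim=True)
    x = x - M  # subtract the max for numerical stability
    exp_x = torch.exp(x / beta)
    return exp_x / exp_x.sum(dim=d, keepdim=True)

# Illustrative shapes: batch b=1, n=m=4 keypoints, descriptor dim 256.
mdesc0 = torch.randn(1, 256, 4)  # projected descriptors, image 0
mdesc1 = torch.randn(1, 256, 4)  # projected descriptors, image 1

# Cost volume: pairwise descriptor similarity, scaled as in SuperGlue.
scores = torch.einsum('bdn,bdm->bnm', mdesc0, mdesc1) / 256 ** 0.5
probs = softmax_with_temperature(scores)

assert probs.shape == (1, 4, 4)
# Softmax is taken over dim=1, so each column sums to 1.
assert torch.allclose(probs.sum(dim=1), torch.ones(1, 4))
```

In the model proper, the cost volume additionally passes through the `TransformerAggregator` before normalization; this sketch only shows the construction and softmax ends of the pipeline.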
From 477f0c74f2d9a83b2f17d46fe1cddfa30e0b42e5 Mon Sep 17 00:00:00 2001 From: KU-CVLAB <96568164+KU-CVLAB@users.noreply.github.com> Date: Fri, 30 Dec 2022 19:48:18 +0900 Subject: [PATCH 7/9] Delete sjlee_backup directory --- sjlee_backup/IMC.py | 212 --------- sjlee_backup/IMCsuperglue.py | 192 --------- ...MC\353\202\230\354\244\221\354\227\220.py" | 221 ---------- sjlee_backup/__pycache__/IMC.cpython-38.pyc | Bin 4395 -> 0 bytes .../__pycache__/IMC_backup.cpython-38.pyc | Bin 4053 -> 0 bytes .../__pycache__/IMCcopy.cpython-38.pyc | Bin 4308 -> 0 bytes .../__pycache__/IMCsuperglue.cpython-38.pyc | Bin 4219 -> 0 bytes sjlee_backup/__pycache__/loss.cpython-38.pyc | Bin 624 -> 0 bytes .../__pycache__/losssuperglue.cpython-38.pyc | Bin 632 -> 0 bytes .../__pycache__/superglue.cpython-38.pyc | Bin 10052 -> 0 bytes .../__pycache__/superglue2.cpython-38.pyc | Bin 10700 -> 0 bytes .../__pycache__/superpoint.cpython-38.pyc | Bin 6025 -> 0 bytes .../cats/__pycache__/cats.cpython-38.pyc | Bin 13727 -> 0 bytes .../cats/__pycache__/cats.cpython-39.pyc | Bin 13671 -> 0 bytes .../cats/__pycache__/mod.cpython-38.pyc | Bin 7573 -> 0 bytes sjlee_backup/cats/cats.py | 404 ------------------ .../__pycache__/resnet.cpython-38.pyc | Bin 11196 -> 0 bytes sjlee_backup/cats/feature_backbones/resnet.py | 342 --------------- sjlee_backup/cats/mod.py | 213 --------- sjlee_backup/loss.py | 19 - sjlee_backup/losssuperglue.py | 19 - sjlee_backup/superglue.py | 359 ---------------- sjlee_backup/superglue2.py | 326 -------------- sjlee_backup/superpoint.py | 222 ---------- sjlee_backup/train_pseudo.py | 41 -- 25 files changed, 2570 deletions(-) delete mode 100644 sjlee_backup/IMC.py delete mode 100644 sjlee_backup/IMCsuperglue.py delete mode 100644 "sjlee_backup/IMC\353\202\230\354\244\221\354\227\220.py" delete mode 100644 sjlee_backup/__pycache__/IMC.cpython-38.pyc delete mode 100644 sjlee_backup/__pycache__/IMC_backup.cpython-38.pyc delete mode 100644 
sjlee_backup/__pycache__/IMCcopy.cpython-38.pyc delete mode 100644 sjlee_backup/__pycache__/IMCsuperglue.cpython-38.pyc delete mode 100644 sjlee_backup/__pycache__/loss.cpython-38.pyc delete mode 100644 sjlee_backup/__pycache__/losssuperglue.cpython-38.pyc delete mode 100644 sjlee_backup/__pycache__/superglue.cpython-38.pyc delete mode 100644 sjlee_backup/__pycache__/superglue2.cpython-38.pyc delete mode 100644 sjlee_backup/__pycache__/superpoint.cpython-38.pyc delete mode 100644 sjlee_backup/cats/__pycache__/cats.cpython-38.pyc delete mode 100644 sjlee_backup/cats/__pycache__/cats.cpython-39.pyc delete mode 100644 sjlee_backup/cats/__pycache__/mod.cpython-38.pyc delete mode 100644 sjlee_backup/cats/cats.py delete mode 100644 sjlee_backup/cats/feature_backbones/__pycache__/resnet.cpython-38.pyc delete mode 100644 sjlee_backup/cats/feature_backbones/resnet.py delete mode 100644 sjlee_backup/cats/mod.py delete mode 100644 sjlee_backup/loss.py delete mode 100644 sjlee_backup/losssuperglue.py delete mode 100644 sjlee_backup/superglue.py delete mode 100644 sjlee_backup/superglue2.py delete mode 100644 sjlee_backup/superpoint.py delete mode 100644 sjlee_backup/train_pseudo.py diff --git a/sjlee_backup/IMC.py b/sjlee_backup/IMC.py deleted file mode 100644 index a7cbe25..0000000 --- a/sjlee_backup/IMC.py +++ /dev/null @@ -1,212 +0,0 @@ - -import os -import sys - -import torch -import torch.nn as nn -import torch.nn.functional as F - -import numpy as np -from functools import partial - -from pydoc import source_synopsis -from sjlee_backup.superglue2 import SuperGlue, normalize_keypoints, arange_like, log_optimal_transport -from sjlee_backup.losssuperglue import loss_superglue - -sys.path.append(os.path.join(os.path.dirname(__file__), 'cats')) -from sjlee.cats.cats import TransformerAggregator ########################################################### - -def dfs_freeze(model): - for name, child in model.named_children(): - for param in child.parameters(): - 
param.requires_grad = False - - dfs_freeze(child) - -def softmax_with_temperature(x, beta=2., d = 1): - r'''SFNet: Learning Object-aware Semantic Flow (Lee et al.)''' - M, _ = x.max(dim=d, keepdim=True) - x = x - M # subtract maximum value for stability - exp_x = torch.exp(x/beta) - exp_x_sum = exp_x.sum(dim=d, keepdim=True) - return exp_x / exp_x_sum - -# positional embedding 필요한가? -# M * N 크기가 다 다른 문제 -class SimpleSuperCATs(SuperGlue): - def __init__(self, - config, - feature_size=32, - feature_proj_dim=128, - depth=4, - num_heads=4, - mlp_ratio=4, - ): - super().__init__(config) - - # freeze superglue's layers - dfs_freeze(self.kenc) - dfs_freeze(self.gnn) - dfs_freeze(self.final_proj) - - self.feature_size = feature_size - self.feature_proj_dim = feature_proj_dim - self.decoder_embed_dim = self.feature_size ** 2 - - self.decoder = TransformerAggregator( - img_size=self.feature_size, embed_dim=self.decoder_embed_dim, depth=depth, num_heads=num_heads, - mlp_ratio=mlp_ratio, qkv_bias=True, norm_layer=partial(nn.LayerNorm, eps=1e-6), - num_hyperpixel=1 - ) - - def forward(self, data): - """Run SuperGlue on a pair of keypoints and descriptors""" - with torch.no_grad(): - - desc0, desc1 = data['descriptors0'], data['descriptors1'] - kpts0, kpts1 = data['keypoints0'], data['keypoints1'] - - - - desc0 = desc0.transpose(0,1) - desc1 = desc1.transpose(0,1) - kpts0 = torch.reshape(kpts0, (1, -1, 2)) - kpts1 = torch.reshape(kpts1, (1, -1, 2)) - - if kpts0.shape[1] == 0 or kpts1.shape[1] == 0: # no keypoints - shape0, shape1 = kpts0.shape[:-1], kpts1.shape[:-1] - return [], { - 'matches0': kpts0.new_full(shape0, -1, dtype=torch.int)[0], - 'matches1': kpts1.new_full(shape1, -1, dtype=torch.int)[0], - 'matching_scores0': kpts0.new_zeros(shape0)[0], - 'matching_scores1': kpts1.new_zeros(shape1)[0], - 'skip_train': True - } - - # Keypoint normalization. 
- kpts0 = normalize_keypoints(kpts0, data['image0'].shape) - kpts1 = normalize_keypoints(kpts1, data['image1'].shape) - - # Keypoint MLP encoder. - desc0 = desc0 + self.kenc(kpts0, torch.transpose(data['scores0'], 0, 1)) - desc1 = desc1 + self.kenc(kpts1, torch.transpose(data['scores1'], 0, 1)) - - # Multi-layer Transformer network. - desc0, desc1 = self.gnn(desc0, desc1) - - # Final MLP projection. - mdesc0, mdesc1 = self.final_proj(desc0), self.final_proj(desc1) - - # Compute matching descriptor distance. - scores = torch.einsum('bdn,bdm->bnm', mdesc0, mdesc1) - scores = scores / self.config['descriptor_dim']**.5 - - #scores[scores>30.] = 30. - #scores[scores<-80.] = -80. - #print(scores.max(), scores.min()) - - b, m, n = scores.shape - max_keypoints = self.feature_size ** 2 - if m + n < max_keypoints *2: - p2d = (0, max_keypoints-n, 0, max_keypoints-m) - scores = F.pad(scores, p2d, 'constant', 0.).type(scores.dtype) - - #print(scores.max(), scores.min()) - scores = self.decoder(scores[:, None, :, :]) - - scores = (softmax_with_temperature(scores)) - - - - #scores = self.decoder(scores[:, None, :, :]) - #print(scores.max(), scores.min()) - scores = scores[:, :m, :n] - #print(scores.max(), scores.min()) - - # Run the optimal transport. - ''' - scores = log_optimal_transport( - scores, self.bin_score, - iters=self.config['sinkhorn_iterations']) - ''' - # Get the matches with score above "match_threshold". 
- max0, max1 = scores[:, :, :].max(2), scores[:, :, :].max(1) - indices0, indices1 = max0.indices, max1.indices - mutual0 = arange_like(indices0, 1)[None] == indices1.gather(1, indices0) - mutual1 = arange_like(indices1 , 1)[None] == indices0.gather(1, indices1) - zero = scores.new_tensor(0) - mscores0 = torch.where(mutual0, max0.values, zero) - mscores1 = torch.where(mutual1, mscores0.gather(1, indices1), zero) - valid0 = mutual0 & (mscores0 > self.config['match_threshold']) - valid1 = mutual1 & valid0.gather(1, indices1) - indices0 = torch.where(valid0, indices0, indices0.new_tensor(-1)) - indices1 = torch.where(valid1, indices1, indices1.new_tensor(-1)) - - #print(mscores0.min(), mscores0.max()) - #print(mscores0) - - return scores, { - 'matches0': indices0[0], # use -1 for invalid match - 'matches1': indices1[0], # use -1 for invalid match - 'matching_scores0': mscores0[0], - 'matching_scores1': mscores1[0], - 'skip_train': False - } - - -if __name__ == '__main__': - from superpoint import SuperPoint - - config = { - 'superpoint': { - 'nms_radius': 4, - 'keypoint_threshold': 0.005, - 'max_keypoints': 1024 - }, - 'superglue': { - 'weights': 'outdoor', - 'sinkhorn_iterations': 20, - 'match_threshold':0.2 - } - } - - """ - data = { - 'image0': torch.randn(1, 1, 512, 512), - 'image1': torch.randn(1, 1, 512, 512) - } - - superpoint = SuperPoint(config.get('superpoint', {})) - - output1 = superpoint({'image': data['image0']}) - output2 = superpoint({'image': data['image1']}) - - pred = {} - - pred = {**pred, **{k+'0': v for k, v in output1.items()}} - pred = {**pred, **{k+'1': v for k, v in output2.items()}} - - data = {**data, **pred} - - for k in data: - if isinstance(data[k], (list, tuple)): - data[k] = torch.stack(data[k]) - """ - - pred = { - 'keypoints0' : torch.randn(1, 1, 484, 2), - 'keypoints1' : torch.randn(1, 1, 484, 2), - 'descriptors0' : torch.randn(256, 1, 484), - 'descriptors1' : torch.randn(256, 1, 484), - 'scores0' : torch.randn(484, 1), - 'scores1' 
: torch.randn(484, 1), - 'image0' : torch.randn(1, 1, 512, 512), - 'image1' : torch.randn(1, 1, 512, 512), - # 'all_matches' : torch.randn(2, 1, 1248) - } - - superglue = SimpleSuperCATs(config.get('superglue', {})) - scores, output = superglue(pred) - - # loss = loss_superglue(scores, pred['all_matches'].permute(1, 2, 0)) - # print(loss) \ No newline at end of file diff --git a/sjlee_backup/IMCsuperglue.py b/sjlee_backup/IMCsuperglue.py deleted file mode 100644 index b7eb471..0000000 --- a/sjlee_backup/IMCsuperglue.py +++ /dev/null @@ -1,192 +0,0 @@ - -import os -import sys - -import torch -import torch.nn as nn -import torch.nn.functional as F - -import numpy as np -from functools import partial - -from pydoc import source_synopsis -from sjlee_backup.superglue2 import SuperGlue, normalize_keypoints, arange_like, log_optimal_transport -from sjlee_backup.losssuperglue import loss_superglue - -sys.path.append(os.path.join(os.path.dirname(__file__), 'cats')) -from cats import TransformerAggregator - -def dfs_freeze(model): - for name, child in model.named_children(): - for param in child.parameters(): - param.requires_grad = False - - dfs_freeze(child) - -def softmax_with_temperature(x, beta=2., d = 1): - r'''SFNet: Learning Object-aware Semantic Flow (Lee et al.)''' - M, _ = x.max(dim=d, keepdim=True) - x = x - M # subtract maximum value for stability - exp_x = torch.exp(x/beta) - exp_x_sum = exp_x.sum(dim=d, keepdim=True) - return exp_x / exp_x_sum - -# positional embedding 필요한가? 
-# M * N 크기가 다 다른 문제 -class SimpleSuperCATs(SuperGlue): - def __init__(self, - config, - feature_size=32, - feature_proj_dim=128, - depth=4, - num_heads=4, - mlp_ratio=4, - ): - super().__init__(config) - - # freeze superglue's layers - dfs_freeze(self.kenc) - dfs_freeze(self.gnn) - dfs_freeze(self.final_proj) - - self.feature_size = feature_size - self.feature_proj_dim = feature_proj_dim - self.decoder_embed_dim = self.feature_size ** 2 - - self.decoder = TransformerAggregator( - img_size=self.feature_size, embed_dim=self.decoder_embed_dim, depth=depth, num_heads=num_heads, - mlp_ratio=mlp_ratio, qkv_bias=True, norm_layer=partial(nn.LayerNorm, eps=1e-6), - num_hyperpixel=1 - ) - - def forward(self, data): - """Run SuperGlue on a pair of keypoints and descriptors""" - with torch.no_grad(): - - desc0, desc1 = data['descriptors0'], data['descriptors1'] - kpts0, kpts1 = data['keypoints0'], data['keypoints1'] - - - - desc0 = desc0.transpose(0,1) - desc1 = desc1.transpose(0,1) - kpts0 = torch.reshape(kpts0, (1, -1, 2)) - kpts1 = torch.reshape(kpts1, (1, -1, 2)) - - if kpts0.shape[1] == 0 or kpts1.shape[1] == 0: # no keypoints - shape0, shape1 = kpts0.shape[:-1], kpts1.shape[:-1] - return [], { - 'matches0': kpts0.new_full(shape0, -1, dtype=torch.int)[0], - 'matches1': kpts1.new_full(shape1, -1, dtype=torch.int)[0], - 'matching_scores0': kpts0.new_zeros(shape0)[0], - 'matching_scores1': kpts1.new_zeros(shape1)[0], - 'skip_train': True - } - - # Keypoint normalization. - kpts0 = normalize_keypoints(kpts0, data['image0'].shape) - kpts1 = normalize_keypoints(kpts1, data['image1'].shape) - - # Keypoint MLP encoder. - desc0 = desc0 + self.kenc(kpts0, torch.transpose(data['scores0'], 0, 1)) - desc1 = desc1 + self.kenc(kpts1, torch.transpose(data['scores1'], 0, 1)) - - # Multi-layer Transformer network. - desc0, desc1 = self.gnn(desc0, desc1) - - # Final MLP projection. - mdesc0, mdesc1 = self.final_proj(desc0), self.final_proj(desc1) - - # Compute matching descriptor distance. 
- scores = torch.einsum('bdn,bdm->bnm', mdesc0, mdesc1) - scores = scores / self.config['descriptor_dim']**.5 - - #print(scores.max(), scores.min()) - - # Run the optimal transport. - - scores = log_optimal_transport( - scores, self.bin_score, - iters=self.config['sinkhorn_iterations']) - - # Get the matches with score above "match_threshold". - max0, max1 = scores[:, :-1, :-1].max(2), scores[:, :-1, :-1].max(1) - indices0, indices1 = max0.indices, max1.indices - mutual0 = arange_like(indices0, 1)[None] == indices1.gather(1, indices0) - mutual1 = arange_like(indices1 , 1)[None] == indices0.gather(1, indices1) - zero = scores.new_tensor(0) - mscores0 = torch.where(mutual0, max0.values.exp(), zero) - mscores1 = torch.where(mutual1, mscores0.gather(1, indices1), zero) - valid0 = mutual0 & (mscores0 > self.config['match_threshold']) - valid1 = mutual1 & valid0.gather(1, indices1) - indices0 = torch.where(valid0, indices0, indices0.new_tensor(-1)) - indices1 = torch.where(valid1, indices1, indices1.new_tensor(-1)) - - #print(mscores0.min(), mscores0.max()) - #print(mscores0) - - return scores, { - 'matches0': indices0[0], # use -1 for invalid match - 'matches1': indices1[0], # use -1 for invalid match - 'matching_scores0': mscores0[0], - 'matching_scores1': mscores1[0], - 'skip_train': False - } - - -if __name__ == '__main__': - from superpoint import SuperPoint - - config = { - 'superpoint': { - 'nms_radius': 4, - 'keypoint_threshold': 0.005, - 'max_keypoints': 1024 - }, - 'superglue': { - 'weights': 'outdoor', - 'sinkhorn_iterations': 20, - 'match_threshold':0.2 - } - } - - """ - data = { - 'image0': torch.randn(1, 1, 512, 512), - 'image1': torch.randn(1, 1, 512, 512) - } - - superpoint = SuperPoint(config.get('superpoint', {})) - - output1 = superpoint({'image': data['image0']}) - output2 = superpoint({'image': data['image1']}) - - pred = {} - - pred = {**pred, **{k+'0': v for k, v in output1.items()}} - pred = {**pred, **{k+'1': v for k, v in output2.items()}} - - 
data = {**data, **pred} - - for k in data: - if isinstance(data[k], (list, tuple)): - data[k] = torch.stack(data[k]) - """ - - pred = { - 'keypoints0' : torch.randn(1, 1, 484, 2), - 'keypoints1' : torch.randn(1, 1, 484, 2), - 'descriptors0' : torch.randn(256, 1, 484), - 'descriptors1' : torch.randn(256, 1, 484), - 'scores0' : torch.randn(484, 1), - 'scores1' : torch.randn(484, 1), - 'image0' : torch.randn(1, 1, 512, 512), - 'image1' : torch.randn(1, 1, 512, 512), - # 'all_matches' : torch.randn(2, 1, 1248) - } - - superglue = SimpleSuperCATs(config.get('superglue', {})) - scores, output = superglue(pred) - - # loss = loss_superglue(scores, pred['all_matches'].permute(1, 2, 0)) - # print(loss) \ No newline at end of file diff --git "a/sjlee_backup/IMC\353\202\230\354\244\221\354\227\220.py" "b/sjlee_backup/IMC\353\202\230\354\244\221\354\227\220.py" deleted file mode 100644 index 81129e1..0000000 --- "a/sjlee_backup/IMC\353\202\230\354\244\221\354\227\220.py" +++ /dev/null @@ -1,221 +0,0 @@ - -import os -import sys - -import torch -import torch.nn as nn -import torch.nn.functional as F - -import numpy as np -from functools import partial - -from pydoc import source_synopsis -from sjlee_backup.superglue2 import SuperGlue, normalize_keypoints, arange_like, log_optimal_transport -from sjlee_backup.loss import loss_superglue - -sys.path.append(os.path.join(os.path.dirname(__file__), 'cats')) -from cats import TransformerAggregator - -def dfs_freeze(model): - for name, child in model.named_children(): - for param in child.parameters(): - param.requires_grad = False - - dfs_freeze(child) - -def softmax_with_temperature(x, beta=2., d = 1): - r'''SFNet: Learning Object-aware Semantic Flow (Lee et al.)''' - M, _ = x.max(dim=d, keepdim=True) - x = x - M # subtract maximum value for stability - exp_x = torch.exp(x/beta) - exp_x_sum = exp_x.sum(dim=d, keepdim=True) - return exp_x / exp_x_sum - -# positional embedding 필요한가? 
-# M * N 크기가 다 다른 문제 -class SimpleSuperCATs(SuperGlue): - def __init__(self, - config, - feature_size=32, - feature_proj_dim=128, - depth=4, - num_heads=4, - mlp_ratio=4, - ): - super().__init__(config) - - # freeze superglue's layers - dfs_freeze(self.kenc) - dfs_freeze(self.gnn) - dfs_freeze(self.final_proj) - - self.feature_size = feature_size - self.feature_proj_dim = feature_proj_dim - self.decoder_embed_dim = self.feature_size ** 2 - - self.decoder = TransformerAggregator( - img_size=self.feature_size, embed_dim=self.decoder_embed_dim, depth=depth, num_heads=num_heads, - mlp_ratio=mlp_ratio, qkv_bias=True, norm_layer=partial(nn.LayerNorm, eps=1e-6), - num_hyperpixel=1 - ) - - def forward(self, data): - """Run SuperGlue on a pair of keypoints and descriptors""" - with torch.no_grad(): - - desc0, desc1 = data['descriptors0'], data['descriptors1'] - kpts0, kpts1 = data['keypoints0'], data['keypoints1'] - - - - desc0 = desc0.transpose(0,1) - desc1 = desc1.transpose(0,1) - kpts0 = torch.reshape(kpts0, (1, -1, 2)) - kpts1 = torch.reshape(kpts1, (1, -1, 2)) - - if kpts0.shape[1] == 0 or kpts1.shape[1] == 0: # no keypoints - shape0, shape1 = kpts0.shape[:-1], kpts1.shape[:-1] - return [], { - 'matches0': kpts0.new_full(shape0, -1, dtype=torch.int)[0], - 'matches1': kpts1.new_full(shape1, -1, dtype=torch.int)[0], - 'matching_scores0': kpts0.new_zeros(shape0)[0], - 'matching_scores1': kpts1.new_zeros(shape1)[0], - 'skip_train': True - } - - - - # Keypoint normalization. - kpts0 = normalize_keypoints(kpts0, data['image0'].shape) - kpts1 = normalize_keypoints(kpts1, data['image1'].shape) - - # Keypoint MLP encoder. - desc0 = desc0 + self.kenc(kpts0, torch.transpose(data['scores0'], 0, 1)) - desc1 = desc1 + self.kenc(kpts1, torch.transpose(data['scores1'], 0, 1)) - - - # Multi-layer Transformer network. - desc0, desc1 = self.gnn(desc0, desc1) - - # Final MLP projection. 
- mdesc0, mdesc1 = self.final_proj(desc0), self.final_proj(desc1) - - # Compute matching descriptor distance. - scores = torch.einsum('bdn,bdm->bnm', mdesc0, mdesc1) - scores = scores / self.config['descriptor_dim']**.5 - - - - b, m, n = scores.shape - max_keypoints = self.feature_size ** 2 - if m + n < max_keypoints *2: - p2d = (0, max_keypoints-n, 0, max_keypoints-m) - scores = F.pad(scores, p2d, 'constant', 0.).type(scores.dtype) - - - scores = self.decoder(scores[:, None, :, :]) - scores = scores[:, :m, :n] - - #print(scores) - thr = 80. - scores[scores<-thr] = -thr - scores[scores>thr] = thr - #print(scores) - scores = (softmax_with_temperature(scores)) - - - # Run the optimal transport. - ''' - scores = log_optimal_transport( - scores, self.bin_score, - iters=self.config['sinkhorn_iterations']) - scores[scores<-100.] = -100. - scores = scores[:, :-1, :-1].exp() - ''' - - #print(scores) - - #print(scores.min(), scores.max()) - #print(scores.exp().min(), scores.exp().max()) - - # Get the matches with score above "match_threshold". 
- max0, max1 = scores[:, :, :].max(2), scores[:, :, :].max(1) - indices0, indices1 = max0.indices, max1.indices - mutual0 = arange_like(indices0, 1)[None] == indices1.gather(1, indices0) - mutual1 = arange_like(indices1 , 1)[None] == indices0.gather(1, indices1) - zero = scores.new_tensor(0) - mscores0 = torch.where(mutual0, max0.values, zero) - mscores1 = torch.where(mutual1, mscores0.gather(1, indices1), zero) - valid0 = mutual0 & (mscores0 > self.config['match_threshold']) - valid1 = mutual1 & valid0.gather(1, indices1) - indices0 = torch.where(valid0, indices0, indices0.new_tensor(-1)) - indices1 = torch.where(valid1, indices1, indices1.new_tensor(-1)) - - #print(mscores0.min(), mscores0.max()) - #print(mscores0) - - return scores, { - 'matches0': indices0[0], # use -1 for invalid match - 'matches1': indices1[0], # use -1 for invalid match - 'matching_scores0': mscores0[0], - 'matching_scores1': mscores1[0], - 'skip_train': False - } - - -if __name__ == '__main__': - from superpoint import SuperPoint - - config = { - 'superpoint': { - 'nms_radius': 4, - 'keypoint_threshold': 0.005, - 'max_keypoints': 1024 - }, - 'superglue': { - 'weights': 'outdoor', - 'sinkhorn_iterations': 20, - 'match_threshold':0.2 - } - } - - """ - data = { - 'image0': torch.randn(1, 1, 512, 512), - 'image1': torch.randn(1, 1, 512, 512) - } - - superpoint = SuperPoint(config.get('superpoint', {})) - - output1 = superpoint({'image': data['image0']}) - output2 = superpoint({'image': data['image1']}) - - pred = {} - - pred = {**pred, **{k+'0': v for k, v in output1.items()}} - pred = {**pred, **{k+'1': v for k, v in output2.items()}} - - data = {**data, **pred} - - for k in data: - if isinstance(data[k], (list, tuple)): - data[k] = torch.stack(data[k]) - """ - - pred = { - 'keypoints0' : torch.randn(1, 1, 484, 2), - 'keypoints1' : torch.randn(1, 1, 484, 2), - 'descriptors0' : torch.randn(256, 1, 484), - 'descriptors1' : torch.randn(256, 1, 484), - 'scores0' : torch.randn(484, 1), - 'scores1' 
: torch.randn(484, 1), - 'image0' : torch.randn(1, 1, 512, 512), - 'image1' : torch.randn(1, 1, 512, 512), - # 'all_matches' : torch.randn(2, 1, 1248) - } - - superglue = SimpleSuperCATs(config.get('superglue', {})) - scores, output = superglue(pred) - - # loss = loss_superglue(scores, pred['all_matches'].permute(1, 2, 0)) - # print(loss) \ No newline at end of file diff --git a/sjlee_backup/__pycache__/IMC.cpython-38.pyc b/sjlee_backup/__pycache__/IMC.cpython-38.pyc deleted file mode 100644 index a70c1a1f852bc4ff4d7e85ace0214dd6a652b255..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 4395 zcmZu!TaO$^74GWG+;(Pnc4qJEY$nFJj70WgC4>O6bHfHHaZGHKXk^stovNAb-M);g zdt&d>1Ci_~4`7j3kR@isQ(h1+`~@WbLOpSlU>|tl1(XQEcWN$P8@kn}PMveA`qa5p zo%(*IQZn#dzy8|(FV7jqpQ&^9si5;ZzT{(O7~J42Fk0i5wV3XkEmOBvXt!)O<~gBl z8q5fDt^9;iXcZx42BmPWHK%D-P!21tipK3=K3r%mgw<9xTx>0dwN_2jouD2rwU(G6 z*x~YUMHqmq8m@`;5#za$IbvEeA8dr1txezsUc7I#&hck>nOE+!)>BsvKF=5K8+<`5 z+_PF+;%QzzGCyN7gGP%%BYdXy493-XoiFj_hZbMqt9*^G1KW6Lw$2N4_!T~fcaxtR z+5e~a6!Auu=A6Yh`P|6V)JGW2v&Fa0%%gk!>4$~Z1+gX051-`?-x!(qSnE0QoVak0 z-7;Ixw_f1vhS5BKO1`<)WGdhDWa@hX(1j%K%eHWn<0$SWe$up6@kYNVn3*|bx?dVt zoMo14-d;i?MjcU z67M8#M+$KwFwhBOE&^rKc;)Ev%F%;AH)I_ueQ|Z?AP&V&`=h|y-+gtbC*woWPLrKP z^rP-!KXScjr{hPk%FG5AcapbDzLg7{A3g?EH# zh_vAa+s&IzQ(4>(RsKkb9uQd13cb6^A-5hVOWf@#E9r;Lykd8iy)RNvF|OFVin+=G z*S)KXx`CxaZQLk<m=u>zaND=&c7#`oCc zOnS+4K!iaX4>p~`r$&cyn>+W5EsN)P9%*lX3Y)eZn4_o`Zuns@5IPm!dgEqt+Q8`3 zj|q_7+vI@hL--s%a!?Whq{cmC$TCLTOA6y3S`?WYvuBt?SO%_}%Phb=(V@Snf?%_XFnC3=Zm z0kv6B1wZV%36dCg4)>7{RIVLI9ltBTP9t6=@FIb2 z0y_YoGgWlmcHm)Cav#N)dn?Oi+y@P#-xJVHXBjYpnvp>P-jI(uu<~n8_AX$N1nqq5;jWjif+RGyo zyPY+|EzDzOHqV1@YuwSeli7zh&#fEWIWm(BNj~eze`p>KwomXs&6(i%cW#jH8sJle z6;q18(gTY-TF%MxPtfdPKFi_Y)O_IiF^)s2{Ltx`A7C{v8{aeDH-3dJC8fw^ys~Z_ z+V@D?0^*qQ`AHuu#I9bt3#2=aWNT!u!X}gv;5rj-^)W8-N*E7WY;|NmV>p-L zFxRa$g0}{R0oF4t^Tn)?6-U;fl$DT86<)(k^}4ZX5arOmWew&AQq`&*C4T~Exo#K9NDiMgY|4VTc66HzoEyiXSJ+`nTsQa 
zxoVoWk=3Eqlm_cTTZ*;1hN~K`5k!HdBMv+D`pHZ0_oIg1r>SIV#8Ja*^gLfS;!a~$ zv^2bkH@HaJ((mD1N>uqwXZNw*o+{0T?ViE+B%Q+JJ=N09>3{IZCaBls6+mpe4o|T^ z-fTLm5PE6*KqR{}Y)@4+4Tr0nv}05ryZ=MM&yyp+N0pQxsT``RuGpQldn!L2HbM4I z%KJR}>OK!&ytE&MYW~rpXh-crCp>?+taVfcY>`QdV|+r#uDYNVa?=ASa}WpoO7ong zgH`3Dcw810CpBFnRGx&q9_pcSQ;5VJx6=;-WuY#_NrLW)kZ~gE$dxY>_!@y%XvUoI zBRbzN6NPqZ#jYr;hsw()lgX<@e~kd0;&PoN@_xkqwn$`-dO1{z2RMdG^rKWnNi3Cf z2Na=lA9?V7(%h6(3n`m>C^B%uCGFjDv!|S+UJ8TjW>4kx5@1=~q92Zh$1U--gB06W zEL1E~6{G-i9&89DVG{A7P}bOityt2!iR!Oy#hAodDH6V1Z58Q`-!Se zJ^CmGP+J<5-@O0^MKyoSxrb#_0SV;dc*k3#uo6baW8Q$n|lZpsFcKWGyw=qV3o4}g{sAQ1u5x7p^T>{q#{1l*R z%4NcJVb@e$l)P5OBO>aiIvZ$U~qsOfsgD6e{| zsCjW5B;%6H=^gX9Em6G~YW3{RWqY=yixM_9#`#V;D)S_~@t82(17`yyU2>G&Lnf=7 z*XxN0X>zOri-Hn<8qLKV7y0AT*x02_z8jg1ZaCv4z8Fy z!f&MDs6$1njVD9m<>1*~KRuIRI@2Kv9gWY5OtxQo3vl#o~5e@B_UOk!2gviSJ{67#`~r| diff --git a/sjlee_backup/__pycache__/IMC_backup.cpython-38.pyc b/sjlee_backup/__pycache__/IMC_backup.cpython-38.pyc deleted file mode 100644 index 6d0a596a98d61d7402475f9db5588d0a5edade53..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 4053 zcmZu!&2J<}74Pb=`S8qmJTtb(_O9c6Y?6UwZHEX667sDL2v{MqNT?Omw5MvuZFl$d zR`=}2rUxNyqMTOZ3fjZWfkQ5c3;zNV5@!x|3WAFdoI#2B_`Mp>tP|)~y?S3&{i^D{ zdar*}uU8E`KmYWVqu+E5SmI<}pp1o~e5)wmqB8Wln6H z1~X#UD_wHRUIkiaRE<}>70t7vT3q+)8n>fHyy~sSO|KcRd24aYYiYg{wd0Q0VTNEQ z>(dQk0B&m76J2ijF zq=X&y=!tmG+k`szUe*dJ;&L5MsN3m zPULovsZtuqJPabBtJ!EQ`@+x8l2Mw4SNxYk7OTiv3zV`)ZRwRUIHkLsWCI&F@EyQ`{smA%)*>9O;;xtXNBdPce}Y&VA;&L zTjmB9Yu>(vr8JTt7ToV2hY^<|QB^D)oLtCE)up&Q4yDNap$xdH@5-+$Xb3i_wx^k#Y{8<>u!%*p@;YqAGp+*uihCx{QK{x|M&VyBF&FgB^k&5u?TpkDshzhGRVV`SH63C&p!%-%&U?&`%!Qvq}R~8 zXYkH6{6Iv#hH|v8sOiWh!9+`0)2C34eZm*>5Ohg0a_D9Je97;N5`5b{=0#6Xw z1NelgitqQM00HJ}Z-ZylPfIRweA}IrAhqFBm7JY8y!z1Mj@Gh^ z(wBJl)Gb_G_F4|SG{^Z0uRTJX9f)JUhOLx$6yRP(mB zv7_zWfSsEU=XO`8>ocRMPB&&o*O+e3F0EhAj*|oqNb^kBc?UbO{0i~EAjO$sf0~oE zqIuFS)=s*hT9bCZRkXPK0J=Eu_H>)OMQdiBnv)JB?WLrpC2cKPhfO4`-7VwPe#4k-73;;;LI?D=9tTT`7G|!@80Kne-geQ3 
zRSO>NR7-naTfnN4pntI?BEK1DCs(t6xuyYcNhP`Be%e~Ex$%=hFdkq{dlM6sB#?S z{bP|GEU`n?)VwgkZQLIriyr(B2OrN)!<4FmFi|clgrPXN>)@Vk|DF}tgK2T4N}nauAGSb{$LzM%0k71yaMuE$WbOK0n4WeJVW3H zfO18cWaC)AKpeWW6}zRZ6g7oS8z^5U{#OW4a#fY1Fq!X}vf|(a0PKz>K1fw5OnBHA znRH3dMPYO-q^i=_bCF~tshs=Z2<6@jP*P>R9r;z#w0V#RlmxT>!Mrh2X+lDqw=t9;Wudhe%9EMB%?3J2rS=+w^#L@@dK*Q zwQ0kjb$WdtRBNOj+4kCf@W_|6AD+BRSV7<(0lGSrgC8TF zU{)zODo&B>jgUVjnTw-I$N7~wc^oX|SLgor8dYuCEr2q9Aj*i27C}T!7jPG?Txkxf etY#x3Q2*KPW&fFbqk*`f*rJLN_&?oRll>P4G)sQ~ diff --git a/sjlee_backup/__pycache__/IMCcopy.cpython-38.pyc b/sjlee_backup/__pycache__/IMCcopy.cpython-38.pyc deleted file mode 100644 index 4adcea9b9b444c94f98a22202cd5d22c968cfacf..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 4308 zcmZu!&2JmW72nwpa!HXAMe19Q!Z^+c(b$sG6lv1bcAD01g2X9mw}(ZF#fmeeR@z;z zXP2?1E{g!R(UXB*Tlhd;3glAs(tn{q|AGQ@YS99P4n4O>(2w66O0=AI7yIVTd-L(; zeaw6Nqk6q+;Q8SCxAy+HVHkg*&iSW~&UJj*XUs6T!C7Q@)0TNm_f5~#trgpz&89pj zwoQW>amg#6Qe3ZsQD#()=e#*hv!YsD_v#wAqxpEjTZkK8BVP0t)<&}qqcY(jaYrOuDdC%Q6_&i^DXz&HG z@WAr6#PhsyX#Rr90un7o?f3=n1{34&jyU8z1 z?Elkyj(8JGbI#+Nd~RZD>Qju?+2UL0*3muw{1ex^B(}uG@k_iN8WZyY^IjG&i%SpK zC#Ls`_bO+%jn2grvdygyQ{^;}c^E`M7qY=nc7>lEC4)2zvyQDQw}+{aH>06Ytz;nM zAPSF#e<+U9L73#3nhRu*^o1XVhe9n!gT6mV^AN-Q9E>a-$Q%pLM}sW$Gg_)oOWEBZ z&p=~(;`7#kDgyT(s0ePWK-$Slk;)BWnm z;;gWI^X^)16*x9C?z-H-Va;1NaFqEZhz0k%`(ebTNK_RE2PGFWQ*|ls4nrw2zb^x> zs=Sx^Jt@Soz(gk=a1kk+<|{|fSB{?ixglE+>5K1e?+;?J-TgEQ_IBRdPUYZ0bn|RG z6T_r`FiiX)*`6u#a(YCE+~3X)A|bZl|LDE$AU(R09?5yEMeAiQfWaE9##~cV|3(Bn z*H1_{T}yYSpp0gwpsVN<#(`0ooN@E92^Co*yD$oC%@`Ru8=Jt{rZHh7r*ICSrs=p* zsc?$YCMnB0e)IN?TOxnE{h-j}-Gg2VDQY zSJ4f8CDy8)f+ZYFt!0B=j^+G&VZQI@B8E2w`A`bnHb$l|AG66^Hjh_c1g(qjna4Tp zWiJB}23N!wHMJ?P8<1`Z5e($||Cp&3l_Q_`i zNbg-Tz}Z9i96mBomH_0&17pkzMwd$tlQYCd;R;}*9co7~~0$1cXzwWpVFotW75PXF=We>eWe?|*w0L1u|GlPjQh zT;+yw-_H=jsv_b&L<5=2;UdlVRV5k5{=Nu!rYdoi`cQ5-@Z7tHpZa@Yka<;#e?JP2 zg!Jb1*dtgV4eyJnGp`&S-OBa-FbQ+t*KVuqLy>fq)lU#^y)ePGK(V1}J)tc(6{T_3 
zBOXcQEppLC%t?PHq$K-FI-4@FSmi?!;}!%`<rt%0sAp*Q*e*(x_=pd@JjHpH&pm

|EnG#2znTtp=_{ z>@i*}s=Un0k4%DR?IW`X+vpK9Zc(E#MIAg?Z&b~jTE>=^a|Lp)KGt&2R)NKU15U@E z-)Fkplrvu9E$Cvl&p8>e54eujP-C=MG!B-F#e-$&t2t`rD@Bt#53mO;+8VENr)W;h zLvyqQPHV<#YEDaYmW%3m4W$notw7uB#S&KB7;jFDVg)*1)wJbe^=yWwu3^-wKI7WD zacEyRM(f3Lv3@oN{S7^Dy=WFqtnN-2)@o|nM$v*~XS5dd)ztpi(r{Hn$OGMJFYKAP zf7g%S_~9^V&$4HGkhFt#8icYv=(T5sNIOV)n~SU~!xY&jQ?+xQoo9Nxsydsta}M9t zm!zZwd+X#scyM<*JjW$>qvI$y4)X54$aZGfu4-r+GNhk%2PhtP{)d9SvcoW?Vkk^h z358K#?3}i{s(d!>6xltl?eXNbJsw}ax|hUi{^=R$TI^5nls{dQuK-gn?zt>SE^9j4VfLnV%NQH6}anJ7!qexONRTfG+ zqyx~7g&bs(@}}G&PUW7N4UI``OzL2U)+vg(522wfqKoP3+X@KG}lXRtaHsv>IxXl9; z5y)U!cW2t{D(5iGVGG^ts**k;l&)L!HLMkDo;A%Hl2sG@HS;3da41dL%>8%C-Y9(s z_0R_1D!3~c(?Gj~5tP{IHNQMw{c0*Zm7(LvOAtU(T9i~YJq4W-<$5ZKeP305A2sqY zBE07NccF?YM<-QzA2U_`H0O3j6rJT=Dt6`D1gOH3F9Ohh85P2=h5x?k`^Ega=FyHZ z0?GH`)Kz^Y{D6K#v{e0{F8$`_pO5ovA(g`&3mz0_C}Max%vAHto=?4BM%YA^2g95X z2J(ai=>O`M^!(u(W|!}Yu)m*Y@*^ar&JJ|B5?9$K(kFHbSIz z)LN1ZD6bKChrqW9yiec*0yhbKpTHx4jwzQ3*JWBqnS%_4@=-PoM^~56SCT}217p0_ zS@%kBm~<(|g9z2t43|H~DCM9+OOH@2rK+NNgF%!{3n(W&;)AY4En;{L{S$I!c9z#A z)wI)yr+CyM>s6LM)W}7YSDCh2#s83ZH4^Mi$jY zxi+LcqV7WiKO;alhjQ>Q$P+9r1xLv!a;+J;PrMS$ln(QAv7%i^{};ILOnvVvm0cOB z)WvU;i#TZ#L}aW1ZsBvw)oLTHvYL(fKnZ7;PTMa^FVSBE#TFHP!2emQHQ0Xva#e)v diff --git a/sjlee_backup/__pycache__/IMCsuperglue.cpython-38.pyc b/sjlee_backup/__pycache__/IMCsuperglue.cpython-38.pyc deleted file mode 100644 index 1cee6022a6d36f2c7a5fe4aea192dc0712ccd9e3..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 4219 zcmZ`+TaP1074GVb-S*fXkLSL#nK;YlVu{QpN(ce8yCGo%QZfnIMFA_&>T#7lJ<}I& zbvwH=X`@KHAx{f=1?__^p7Mea5`O`SzfeygkkCBv!V4%7m+zFv-q{7Z)u&FKQ&nAc zPMtbmeXm+A8+d;3;g$X0tQ*FksB-?PqH+yi`Vlh>Zg3VF?McbnOxMk}sY@%e+cul% zoX9o}W<-T{@l4{jOK4?=Hdb=Jiv=^dAyP@SyxEL+9 zmzW{g;qquj7@(_~u8H+Ag&3S%<&y7tjeaO)`n|$-!IJ(B4eBib(i%oH9^fYe<#@M{e+Run* z#O1r}eY5>+`#H{T7_CdEmQsuTNN}%n5QJQcSe3DGNHQT)Rh8n_Adn*U zdeY~r%)6=Al|q~dbabMGi%{9rUpczJa&+g<47rFYeZIGSkVIm;^FiqE@4T|zm&u{% zWa)M)2669j5PN>SJzKq(i8R|z4?`imeZO-w=x=}Lt=DGit^TpBVNe=4bwLbPXBFm} 
zTKX3f;JJ26M(S9)G9g72GfQ1TB{vR@+~ka#_f6Q!8rr#$TWiM9$k@mP&o+!P8#=jj z2zyQ14GX!G7dFUh*2&8^uJ4KLYt1)>FXJHYHQ(N+bv^Ij_N8dv5Ro5eL8o~=Ol~)y zdsB#}$eMn*)w3lMi_Md;OucfF}jS~+__h3TfD%Fh<^JM*tG4y93?e>BZ&H;&=K+4t2fir zCVHQKM1<_#AqPw!;^**@gVGozGwvEAmNPnKG8q5Rq7AA!d&W7$<(JT4lCT^_I;B(yl~$|yQ<#u#l2G#f4$ke|JUD+{{FjPUqQ54 zqMynYNL#LQgQ(}Fh-KJ0+DBNByBse1*?}s>gUCA&K2KFC3i}?68zjoTb@YL^ANZ*% z(?Q^c{;`k*4uVg5vq?(wucWmp6N8oBq-pG7LaKPy7MsdWMc7q^P7-&6p8P8H zc!kLGM7D@*gM7wR$@4m)kF&{pxWD9!&^?VWy#!*o6|2nfngv#171Lxkc7$n|44+Y9 z=08u?9@`gN)1`N4n$w32PjlA?nSv(6ei$2{vY-7FK4+Y*6E)W{>JY&S-qI3VOKgOx zInrJpn>h5W8E;`6E4O(Oa$EC`=AGO=w0U9O;Leem<_PjxP5w>mkl#MW|Fq_e$G;21 zV$Xn{c33H+{a3zkaYwgv^5SDMdsNH|$e&sdzBu8Ln<@{SuK6BT^OEsh<6YyII8rj& zxr|rWjYIn`&DO;}X1sP*#|m+{mvUN#3HdzD9huo0o2#%1MFh0YnYR9jmULzGM_U|q zWItmxmm{a^(i+2CLwAVvj4FH~ck|NN8kX}iqN&Oo7-@0Q*f56W_s(YfeIu)B?YwTW zgci&9bq}Bd_hXzDWV;FN!cU_*U%|i1|5pneVFq)=6QyE#Ms=Zy)w4v!D zQ6L@dbDWjyCojG`h@1MPB<#>k;-=s1`+;mG-R2BEH2s)2xkx)Q=p!?vs&cNf^GI!1 zm1o^{&hfjF^5EoDvvhI#Z#=RE>7Kj}ii6ML8BVC1El0VLpLGsIx-;W;Rb9)F-n_Jv z00Qj%mjpjgkAglynjlsMz?+`fIV*QnaoX*S**&T3^Z3jAJbM1hejKUV!$r}V*2A3e z{NbuvOoRC7Ad#^b0EEyX8xtsUsFvy#sChj%FFU}z4w8^xZ>>qna;g|76R1);!&#|N zMQY^tfl(%f8;jdscMyik0#?JZj_Qe!Nh&G-$(M+Hg~-b^VnGBkCEKecq4cFn`$0Uh zi3Fg>KZBSD9g)g{ZV#+>fDBNkVKWh@iB!&QNQ5eU;KS*u+(5h5iX<$eZ0;k;Bg3Sf zok_8)oTGjQbL(PP74)oOS6!kWO&U*1(rfo98>xRs(_N&o$&}#FGYu_2@`Xdm-tN>ax>MtxdLR)vXn)#x52pPuR@;SHbxv;K&8MWwc&l4WK&ob4+BF$!mUbvif)?Is1Rh zk%Uy_vqYYIgtQ8jY6+Tpo+^7Du<9TrzT$beV1|iC=QMc-JyrEA<#j|DPSYM?O?iz7 z0W3+$k8o2;IMlUp|C+p?*VeU;{um(?JP+HotfTxLx(?{6^*&ts)f<02$*u;3Mq3s( zNR$FY@L-Ut#x#f@ZfR8|C^wa-s+bHio+R>=rqcVx&*}O7Rg6=-ErQ-bmdbabkT*d9 zGs_xn=*x&IX`U_^!Jly|=mG&9W4Gd1ejRPvi__{>cMx|dj{OiY=ONFoD+fTSe+Cla3^)*j_z&_YtG2;P?9? 
z=E^>VlK_(fsc$lHVmF|zB1<%)##Q8S4Vkr{1{e^b2^-#_WHD@Ui*MjV z(FX_I1u^l&(1%Bu?t0$&Z^qmf*WAuJevc1**4&ue**~`EP}zksbUY@`vyu09Mw3iP zryUz;o<5@Zq>E%Yus$MB%=9*Xozw867$mp%X3mU%>;4bP{M1FFu2m@=Zz_`&E^1V1 zE91hfmBMi&%3QfXtCCEWyKtjK>9}n~wssSvY$0kjpSZ|o6(M&fv`#mo$qE9YuB>%< z$2oif(^d^79$W@kU2pPvHBm)pz59Rv97!Ld*d-jSLMG7=;fFbBzG z@Dt`J@e|C#JyqZ;^Cur~P*k(lRXfQ5KL-vlS@B`|!!C-?-lFVQ+Uxgj2= XIx^3w;Kpw^m23)i>6d5x>c;FJV%D7q diff --git a/sjlee_backup/__pycache__/superglue.cpython-38.pyc b/sjlee_backup/__pycache__/superglue.cpython-38.pyc deleted file mode 100644 index 3acd4d8029e73db3e4b5de3d5e0d5f1e03dbdc36..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 10052 zcmbtaTaX)9TJF0pqtVEk@r=jU-XvtBwLKm=mnFd@&c=z8jWcU(Cj}xw^yr+CS|fGK zr(3>=RxP{ZRKkNpmW73-{2;vmd7+?+Ctlbp;DOp#9;kkT+N$01164eSBIf%~x1^En zEU7|MefnIxPyd(iznpn@aWSvpck$<&+dq9?QGP~^{l5$v=Wzx9rYZ_knA%agYE4zC zu64AoUeje=?-(@$eMZOVnl%%B&vT1e_f(c)**j(}!*VQtN2z5mDr|uj?kKFl&30~2 zujSbyD`I2;BTKA=ky1KRK+geIMo&5ISwzn=TS3oC+Ee69>>%E<%GT}}wNlzT#MaR| zfYxD*9bre&Dr4<2b{uPur)!t_3Oli@eX5FoWeT%92fM4aRqXt0>?HE7Cs_g6?g&!FmA1fD>J@rSL?0r`D9-Zr5=f^O*&ety}ADpV4;5J|9 zC)O2i@vrS@d-N@zU|&xpy!0vtxwGfeoju8)1ztiu8?&*$mn+v$lwlT}$betkhq#jaj=ruOQljjmy+?>AnZb)T5zXm<;Px)R-Sb^9 z4D62Ia6;Gj0(;vDIJ3bcx9c={Qr3f(13J)WY~3e;BN5E(LOhwmm07U&LnUi$*Ip!Kub>taqJTv30`%DT7KbHgFTghGmj)c|9;h%-S0Tlbohg-LM``-zvm$w$?vA9WrBK>>vV_T<&WYF1rQ#nWgv zsB`~MPaDBAsBopGGNlI5ujwBtJ5nL;WPzw4J+RUINHLU}1!7w9(%UHoU-ue5<6>~m z{x%WR=G4YMZGXqU;Rb!DW0N2|5b6fE9S;bXQrzwbG!LQ`G$z}i-B6u`p6>3ROAne zW=ipnv-P^`xnaF7r8myV&2y3HoIyizt^wJA%7t}70$tI;7v9;47k0Q4_64s)5JVXx zpQ&-KUT<`qAi(oT(F8F_oI-I4S3p9ffE}yq;LtVcJ-W zv=KmUtTv%cP^ARWNJ;d&P#^i>QfS2*mSJZ)b_prXg)N$8qYgf`%`65L>R->xaR^!k5|c#Z5=B-@x2}wyV(Btv&p8>8whI2<|yu zUa!abdcEtjeuwIXdi^>CB^}As>&$O}hD15>YbfHyl&}rn=}d?#DSVcuNHTvNwQu8+ zWF~+rn1cG|C`!~Mczlf}UO)k%0Yw2eFcKpG31CGX>ygg%Wd+b%r1CYk8+obWgv$-SWDr!2SCxqJme6=TKk{1}1Yb9jas|7$X^A zan}p_-9!NMJAJP~*yMDe{{Bt~B%%3Wn*b(5-D6YYih$3gh!;`B2GD%_VK(KQ5YR+V 
zdJl7gEQ&-B%c>xnHK=3%PqGpn0ZGDsRN(M8n08N-=%6#>P63w2WERlDlIV~jdc-Tz z`n|~=#@VZrcMo1jjbaP^_H;{j=-aN>5FGGpLnL^|=C^tfy1>2RhVE^uWK zAkHw78{V1YE9p4#mF~gspqN2Mq($l~c)Y6xhePfA0FOvLtb7dEh*XqDXtpi%krC1F z>gdUY<|r$_k5Dfg=9smol0P)2`FRN|;5gZF$@YpWh@cJXnEc365Dk}cNt@Y(qum6UNUmoAyEyAel{MmCUyBbi9w= zwWp52c05V$iYa3wVLv2?fPqZP4VT}XQ5>|qjDc9~#Of_bL~h#&8!c&RBs5PYLR`T# z7gr(*s`X&7`B-A62>e({kb77VPtqIpf!h=Bq9$=cyhXJF6$$5I3q!Zr_xph$B-yuP zE8+ed)O(SNH>vndD&#}tmVb+y60HfO1KY>{<1HqWW_nG5Wh~v z%T&CABG%!0%G(kYFbJ*Av{eTxh zzsLDTt5e8xI+umtbDGk_grbuVEl5KiK`(lr6l$W|481}iQ>5;}E|HrP!REtFPRph) z-5)cw+L;j?;pWK}LBOQRm{*VVz$m# zwA9bh6Wc3|D?Aqh81cJQkU15A(_hMO-xLknW8#4 z{v}^A^=TfKM-s2EB8p_DdXrbdLAM0;d}TpI>^diiX6mHR%&-MWpV(ffb8%}c6~K5w zMugH4d(`tLCP+bi0hRrNI7&hSkRJF7vq`u%F^mvTGNLe^(md*0a{x)^jjZ!ytsztP zvs0p53UC4i@C4+baaKXNKwkHb2k$JF2@l75X|MOIAY zsVJj}O>#toIjLtfC}CuWYEo86>W8=jT1gmI)bd~C^}*VgY-mp2Nw0_|ui*;FkWs>- zjc}%&Br4v6`9PQqQ=+%^x{Q-cx;iW9QO?pIrww<6RUjh#1wb-@ z7rCT!$WUMmpoGlD2n-q{B9(5Wb_g>gEgds3hSeC8;Tguv>&LNX+CN)Of!p9E`!W~! z{;qx1^>$mn@TQ3iX-D8K+n4<2>7eHzqQ*oU@#(hPal_lQm@ffDGYRYR2=e@(-f@w0 zB3X}f9lwb%gZ%7oq79E-tszn5H|yPga`*ber8jLxac{Q&d=!IjZ5}bR528lzC*l7%kF#xep52i~C#ng2iDm%tQ;(VpIf{ z8pEY%DJnoZaF=p$VJZg~+?^dRWP)$I!|m_%LZpv6_O&FK7vcXY5fzYo3*h%H+{HSI zSPT8w0{@YanVCxDAN{6eR;84Llcv+@wVYHGwqvy`4$vc>kdNpFNXiB^a~m=1W^$|6 zkT2=?C{YyvA|S*fparB(LQo;0ToweP@j;RzKlz|~nw=2gk1-THg(68{9oLG;Hx{+B zT700X+5q&kx~AUz7gSj78diuR$f45|uZ!;6^3 zb9$VmWFAs*fh1ULHTqp>3(TkVXP>9nJ@gq1l;3J{sbYVF@qpg0sDc2G5NGa3f=|DI zV(*D>NjWzH4>IG-Vp`LnGYbf24&g_hPAl$NHABXwbW)K;+%3;3_Ql-?;@nlr+FnGe z(fLp8VA`_NWR=~eK!UPvU6-*AKaB))-p22udxLv+c+XehCN9K*)ms1Z+~Wzv`hf)4q)4OG)=0RD@D=GTA6?+-Yax z2iZw7Jbl))8}=J6vMY9)0>^8T3F-KA8;{YoOERVZjAOHTt$Y*7b4R$2HzTssY5D^D z>P|NQ{Dyt41wys_j?9sQ9m2ge%RG1cAtg-ttwskQjr7ND`4LX?$a*+gEBAM;#qD{z z+b#=`Rps{SYLZo9E)sg&_I&U3l=BH!H{>HOJN?e-*O}i(4$r>o2}!742?q8%$UX;%Qr*Vjl-$&-Kn{LWKIRI{kLTZT8?6m{DswN{myq@Kh}~}_ zsZu;6l|IR={cMzvNm&_e*l&6EYwumYa>_otd3JNdZiQhlc%ZdzVNJEeINN@IQXu|d|%wRp<4IPQ}O#KlHjyOO>(^MGZer0^B>diUtU2< 
ztw2ZRh4jFe3qyyEtj7=fj0|c2o!# zMvE*n2J9g(p^SQrz~aOMw3B{UtT#PxmQITEJEVPe(!Rt|$z2AHb_rc#D?Lmp z3*LmY$3?w28Jl?KKf<2GA5$Sc^Qlphj^R(ypO_HJE@+yXRk9B(eR4Ob4UbOyB5NQ8 zRi29NG=7-l+bW(m^;bH8Rtmzw9T<>xa;_;25ARO);4Crm6e9!4WfPpRkbFEJnX{Jo z3poaW{}AitNLt5A3nyL%%#_8l6!Nm%U7cBS7QBpy`rt!G7R;C&M;+XyIvI?6=2Znz z8YK-1I2D1UAf1go7ut(y8!;T@fV?b~6=4){t^&!R0|iQOt86Llqn-@r;#>t`Npj>s zvV(91i3eoHuOs8W%SYjL;k^sPLbT8>Mum0}E%K?B;CCSyVj0k^I4UtSS{!S;+HeUY z#p%eR94X3?Qj{MZV4&mh0KTCNKWw4BJX#qm(E)r(S@xBp@??eVUB;}k{F2pmW!E^b z3|FF3v@)54_CdLB1^(S4o}Pu?>F^op`3K?g6{mgVClddPvMkGTS#ohL;bP^`;JGV( zZw9S&Na@%;#}$uS>KT#+abaI$bFNj5^AltXHuu%5Nd!U8&{mS1M5x5V+dX~_$46NS zZ5%5mwQ5|N7?uPUF!pc{-WBY+J@W2c97oHO&&~8+rE^U>VcMKjs)NEdqf-<*MDe-_ zD7{Kb{zp_$sGQ`qeu&x|m4k_kCOx*Dey5W-voc7`kryW;A9?gC{*;#f85QJS3i1nu zgplu1P2$K=)Zz?Ghzlo1I&ZOsRf>dAruMP9W-%?#~@-WW${SYVJ zz@?;emQJ@gllh`DVr}BvYGO7xlmpy=89`Yqx@ZE5iW~CVy^vP$^k!4G^oJ8I#_Bw4?yX-8_S?+Mj z8S2bX7kXGESp{|<6gX{Aw5i&XXxhRrMf=dF_O%ZM`UeyPv}lr|fgTF?>RFg%SZ_SC^T~p*m;kI2X ziK3Xht<~l)Yoa9PZ);+ndhD9Dvdpy>rnfW6Z5c;3QNC-51+jSBsx9KZ1EPZWD&zMK zpyr@hLe0{+rh=M7Vi`5dcKJaqO-x zj*BNyTE@7pcoO42IUe`0JR+XjF+MTW389J4wU2gIYAaa%)8g~u8SyMueL_^lbK-gA zPTn2~tC2d##LxXEy;#zlK?J@ZzT93(Nm$VN~);=enmdE7h zK&j8mXSR%8;}c!|81s5Is~zd|Jg0r4U;D8<%rpDqcxET$33&`NstW5}t@`pk;-jjb z=7p5qM%cZDymi%!x2r~KUGbwhwKsa*wp>TO5d<4mGtD=)y&#b7D78gc-waYm$+)M2 z2AFSp*~k@eU&I|HNMbF)0tZG@8!^aPW}QfdP$F^5KMwyZ=;ETJFKj-qE`fqisg z2=k6L0wX6@Vo=+Trj`?3ZD#Kb?X2fW=RQ2n&>3k|y0(llbBXy~4a>CpN8Go2?btun z_HIe#cBN{_ZmhzTUORXy_iGPO2+FRJEKvn_+vR$kDQspECKG7!qa z3#oH446d(&YPYy>``BB%3Nvujtq*{(O|`{->Hc zg36y=IlUcr!REhq+ zH8AiqK^$!eLX7l*9qU6Qu@h@aLuxK*O=AQC9n}U-;t2CGZIDZxmI%Z*Mc6x2l!QGL!x}hJ$k=qU%UhIcKqpDuiLyq|9bjZjF{Kj(gh zw%3(Z#;o0N%f0SLJ+JK&w|n6HhIG9^O!wdIMbr7>9&Z5djqpueOdmB3aQuu4TP{M5#lqDH7>lsbOU?Zwf9%$eH?)eTRHchNUuR-@G&esFc; zUXTuKbl`Ply`C29^-d^yZOWJG^=n|AaZ9dV7hwa{COAydsJw^j0!?Usyu}UKZcnzE z_xg*}k2m}za#wKkhLb8=u@%*QYNa+-tf1~7NvX{u`!L(lj*}I-NJCsg0*1n_Lt(-G z455smn)H#Gn8G}$L48@V-hvfsLB(Y%EK_;Hfx0u$&)zafJ*_i6`U{(4J2;7?bL#x& 
zcF-9!hDjfEuR7pU)>BL*NBTk3>txJX-0B4l(r;cHDkR*Bp^d7J zdW#18G9_Q3grrBEN0M5w6q^r{vEao}Z8XYnqEAFtCu7u#uD*tRgDUo3v_Ew5{yd;n zEMeR=Sf7}}fC6t7L@(|>7ff45QZPN(U{ zq3Z_?C85$?u#s%L@@5x|8u{1#*uUk%3n>J>PMuzh(wrb(lkI)dMf!l&LC@&7k<8Rf zVkG)9Xue}aE3xryD4IlH(LRQXNpz%EY_}ZLkzq3Ln5fCc_7H0s>(G2$5YDbnj@gL% z7g+^?1IT4yHmK@~)OUj_CKp*BV)lc$*$y|sc^{MqdW=BYmeIRsUliEM~1T2a9A{el3cmBF1?`sZeEnM#_MhQyrk(dq~pa+brwk7^(%TAUn%_l)HSzE@-%n#_ss$PPe?zgPxZUz`^LZ;*dOadE3t<5NEVixKe+8h=+Gg5DU3XCkzbmC+q6LmGnY&UIhj zm~AyS@i)+r>g`m&305dWC5N-LD9@p zv+sA+`^d497!w+1GWMIjuotNhsEJh%5g1yD=kpfT&Qw#Kip08^Qu-aru^PXCoR51- z`;xwRzho@x{l|aVtgA)!8jbNfB^N2VM9CYJkSWd#owLEN9NQH2CROt`aWb5 zo1Lb@KBtM@hs-ja!N}O#sqFJcq!v_ikU33er`^rOo+1hL1xhkcDWm&z_L-2+*pjJ& z-c!3%(vRrwF?^vdys1cYGUKQ(Avf`mn02>Nk>)xuJqRlHaM6^9Z=wTBoQ9aHfBZ9+ zHFc35uAQAt-szS16^1&?Mi^f)TWtT+KQ zdA@ZtebA6x6B}++*WtF>(%wEGPv|VGOVDAyjK+n>4sjnk} zHD#`y@hYj%Z{p5WsA1giKfiyeQ~rFE_N?aj$PcYl{T3x;5LKBHnq6v>9~@0=BTs;| zb|y9*$}w+|*!(W;h=Nyg*UCom7e%vw_%kL1%V7%{HO^#@wTJhgA~4#hi?}1=d%|8? z2(4Nf=CTeeMWH>=mK{SqN4mw(2KK-SiX>NAJ|T&+p>s^MJG)eyLpZM|IRtA)l0OXJ zgKXWkGQGpTv9X@o4HZVTW-`rLJQyHXEo8bQq}nc$)MgG!E!rbQU<%8jbS-t@-^JdU z@pJ7_lK34A5Yawsh*I);$)H$lN$>B~xZu{+i7b{LDT~A;sR=)3W==Axq^1 z0q(JAuH+QeATZNwz<9!Dz0=E{K`*s^zyf0_p>Iwu@?z*_D#gge2#OJrz)!>z(I55~ z-U)O+G(3>9a{yY<22BS5W(eAi5HPdnY8*kB|(Xv5oR<0uks_O;`9XIejdo8rs< zEACaPuvH!RT|d~_4plJ4U)W&5L3gi&%~Mg=Ye*LzU4+k@e%p_4%|g>m%i5VyE+Q5U zqk7v1qDC^27O*9VbjYHaUmgQxfqJ|ZMz09-MP z%dzKH0w4>}7f?k+g46s&9)^e@Ns?fT3h+fKoS7Cf%S!0d@+pKbO!e zE0OIQ8H2ebN3umI=A1Stai0>#0ScPrsa60!OG<|sejChB+ikSx9%?Tq^VFZ`L1;sH zZwV`yAC%)EzHjKpeakHCE`Cp;KCi3qq904ZK3tXT6$v~G_zE)YxR4?S2r(iE0wOFRyx3|u zE8xI!Pzi{gP)cBn5pP%OGS1+)iosx-QQuPh`z4r>U&5Vl-+v~ z+tU>hs4#IE2uvh(Lf{8n0i$V&oXCSM3kZd9`XfqrotnkL@w_PC&ev={+tF$cI}ACj z;y_w>m$2%~z)-#a#tfzO~jVW&gSi-1tW5MiOW*U`Sv) zW9WA;{1}~dZ%9{2=zxwNNH^Y={663wOQ;^$} z?j)11>tqSvAya}dF~bW z3IObY$o*ypQe%uU_c5{dl97iQlQLR$zZ|$1zkc@t*)(o@ue5|?Rx;U5y$v}2t#$xg=pQoK*=|eWbO}n^Y_R*yC;zR z^ap=T*FV1sT(C?Yo$|Xlk)Vh_&E4R0hA3T}h(UdfWv4~BHua2)(u0pMis=IH^o&;f 
zloBbVsc-&c6lT6T`G*OPAjeTA!S7I1sAU+QqfbPcT%Od!wg~GEd+c$(RoK^tKCDP` zcW{VQFUE7N{IC?y4d+FE1jPrGL>rbx0iF)PeSWiuH|BPn=#S!sZ|m`5d|+2^RS@Bs zqBJ}>=`qrQ)e=^|gs~3A%j6x=csNWrEavz2+eID65#@FDCs?WaQ%e2}$s`76cbwer z4K;_fi5u7dybA2a59$n=tGlQ{|L^0G=E7b~#O)s$lV^7PR{wCY+KspSpT80k@Hykc zF|2vwsSgKFL3PpTQB!}j-fzz_AXPi83Z#tE-h}6 z=BShA$lR{NRoH+IX~xtUpKw)_Yyzs2s@|QnsTx+IOj-^={4Jk>dk4+1?Wse4pckhL zZLFdWKQE?(okpBmRH1ItTXt6g&!5E3Kcvw{RBv^0YWyfn;9t-hQItc$P029yyq3T3 znAtOLT2zL!Yy2W_RnT5RuO<8(J%7JnnM(6CAlt{z6lhJIxc6ak2o?d;!mb{r0ELcK z;0ALI4#_f?IGUaKdT9dOIRdfP*O9!Jz=k z=rJwB$xa+{6!FZ#x)<*P>P3)Cpv@f4vJj=>puvVCsf*IMj1U;?Opc~5=HVIOGz`K) zEKWxcx+st9s3wQLI1NJpmmL`8DNsOt9<>xOS7mh!Y^o?FV?vX~Ty6@h&m9KdQ0!YQ3=Ee#Ki zwB!KRRN=bCq%wJfYpGX-S90j6wqsq;2206evNY*~@*#e23C`|3Mkgg?!hxpo55fH{ zPwUFF9b(70k{rzFSY7VF_)ag_Bik_7p4;_&^{~S-Q|M7z+EX~Que6pHCkh@O>@2^Q zg;wMfu4j0$j&MJ|)sb2XM7ykyDaA*Sumcu|k z&Q^a(Z~he}e@)5Tl(6dfTgtKjOiDz#NYWfks1GNOo!8U>Zn}*#T#gy+8^}uaBy}?p zJ75kkQQi_>?5REQ0-fF?B%lWt&ZllzI6qPDWB^L`TpORL;lvFf1k$xMKb`@0c$~v- zWxf%1E6c2<7A+7om~7)DhY#RP(iiw3KOYHV{3oUzT}7fn<5dibT^DgIlGpP;ck;-Y z(1GTnxr|7oVich*4Go&|7mocpB9RsRNJ}o`2xXD9rBQ@l<=@Zy2WC2Rf`H^7L_jX# zyXr3}Vbt~l<=&)Z7FNvQ^WR6q-{X#!kl-M;Odl%Zhv(0nLg|R29%N2B9_(fvf4wO3 z9In!T3sf78%@3FU=JJpFadtck2&))$qPp^g-;2`9LF3B=Jy_bUQuKQ ze2L65v!xnRg^?B`iagl2ObeW+f6TBL`Oy1h!~cQa`A16r4GDl5`v1tRb|2VC2GYWP z5qCtvn-3r5gUaN5M`(nC{Q;lkVY&Jl-ndUEM>sp`n|Nih|U*(~{^uXu-9sR2MM)kSWqW^qlhmbik;B9|1a}FFR8>o}) z+@2w^<1{$=j(iA%zet5)J9V-HK|VUjMExZ75R`n4l6NSPln_M5k$oob6qz#PWV}fv zQU6NG50UVHu0){ziSlMs#x?T2@YjX4N#-maJxamwS|?KE%BDpG2>j=c1~5fB?!;ew z=+A5F2^yCcojTbtdq~3^&-BTk!|5{q$#9;Q5Yfa-K$RBpH>#4kQqC7ErDt-LN~Kge sc)YlBxNKE+;Llhrb^mKQt<8 diff --git a/sjlee_backup/__pycache__/superpoint.cpython-38.pyc b/sjlee_backup/__pycache__/superpoint.cpython-38.pyc deleted file mode 100644 index 262ba3ef492129b9b831a3a778a2756ad6fd7a1b..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 6025 zcmZ`-&5s;M74NG4n4a0$+4XvC@7jseEC?9k?Aq&)kjTXOFoYmtmV5z2Leo3dJJUNo 
z-IMCxwKL2>lr^#uGK33X@?l5ffQ7_~3l}7gT=)a}1Of@|l_Q9l->dGK-NDeUdR_JE z>%CX6-bcSNKVQ-CocZ0wEw-v@f2GFp$3o*dl;m#!LK7_1daTJ9)p^L99A7=;J-w+j z%{PQ`j|o#)w~VIg8}y3OJzdzM42*?V1^siPidIQ{6zv+?3u5t>*|cBM#0jxXHgfY#A$H`D=mu^aaNo|?U8%DS@rqBym%Dvd2xPhG;889zZS6O zf?x5g2aCc9v@zdj%@h6!e*xbm|D?YpKC%MpSG1iv>Z_CbJ_k?yHLdZnLvX4!SY~W` zY3DjBJc{nKtP=GSS9&7YOSG-|n1d&s&wEMgT#uv2d*02Ux7%})-F{#CNfN}7lg5tp zd-0y{M7}4t-gWwM5T%JD85n30m^@S`_D-TC=K)e}hwbnk5JimEA+$3nsOxtPY;9aW zBvkIto z`*9p@2%G@BDGgdr`{$+gPTcd?TYI6mb@A!-zKnPLR+_9Q{%+LX*^S)L_t(LxFBL~? z{dY5aveacPkyHURW;3o;G_LUngGZ8}*AIOc$GdhVj9XrqJi9iTB8l~emDQe45?>N+ zpOZ3{l2*wo17F688wOoJGo=@`@!1YzFU`yw9bfv5GB|F5YPk6> z$@7?)kZ3f1fn9*=$d8~-TG?%p-cBBp97$72?+H|fJcR^tVNF>Fr;IpEad*Hkhg|tx z{VhJq&4Er*P!iB{rm(y&f*#II5B(_DHzSSv-G(6_!`eA_tEgqAAQC|fN~SnCVq*cr z-L5+o$jca<^8vlES)IvG0CPy35i`jQGlZq0nE4YbL$;&6WsLNZF)~LMSunxQY0K*R zkHDs~Zq|_3kM%5TIOzYvwE37FkVRtxOh2m8TxCX#WzCYPvcC7`OW5BcMr>8 zJJ(evQO;vlHrIyLbQ6!vRA%f2{*A00#j=OJVN^@7FSKUER*I3$9b-i{Xg=G@SSw?l zj6vZ&*?ve34~F zKO{MNlFtD6T9XM4fdgym!@1_Ca0#(=yhxxiIoFDV=V;P1&TD?S{kTHx1&J?mb@1Ud z)ds~fI15@wO`fa;+?sREivmBysMq7jzu>$c!;%b~S8yf?4k&01E~x#VkA(DZsOhhI zTR{}>1+C5+_Htf)bMw`&UT`)pUV3`X>7;2td1igxlQ)CCwOF>-y{%;Z(o>f<)-FEv zW^^nZW-@je|%}ak7q*vvq&}x`j?da%Sm<{D`vZcw@che3jNgRr7p4`a< zpD0WAP&7mKR7XjEiTzS-Upru9Y(3P-bo~U8i>s#w-BOq^+(g$xtAr!OQQoCqTbOT| zBR;Ve=q(Fa2HaH(uYDJ;!mSV%@yslCuE6RuW2*|QDhyG=_3@`jqxfu>tcRI8`(o!RJ!~FmVAI(t*d>&Ma1tQt^S$bY9?qjsj za5YX`e;pxuJpu#8=ah%H6xP)`voUo(7Tn4L-R#qg1O-Zx^>{%STX|1ZjXE6PI!I@k^_N zGhd25fva|O4IQ$TVdF6vM%p2>T{nn=)OAUvx4qpkg>j9xgLY=+-#k<_UiH89MrKGq z+|A0bcsDmyq?j2LF=iIogAJi(>|OGT?^gcZDB2rE`*P8~T(qAk+D{bi&7!?ow67KI zYg>}yTq1ZoK=|lpR^O98toQ;#K$zb!!8z~+OKM`yD(Ig1$!K?*gs5N9ESJM4f8{3`iv z(%}&=aJb*oHbTW?JqQM;Jv19}!n8d*5|(4yLnDy|P2000?PEPO682>}lG>G`t)q^( z0Mr<1b!UF3cCY|@!N=r`bOgJwW8YH&?+A8eXAw~kBA)MS->2;C1jQq;418=HEDg;& z+8v5L?6FB>Rc~(5%40L{7j|8ZsZ$PTq$h{Q9kx>!6+}45{%$kmhm;Q^qq~NfXH*jN zXldywWO${0u5boy#;M!dZFZY)>!L=Z(O()GdH)YF4}oN9l&`!1i(__8*oxn)d9=gg z|DS`L8~Y7Q`R4urm2M3~T85H7!MPL>Zd8 
zi4C6dwTmi!;UUoR=EOKpSHl=v%f-5LVnreNGLyk>;{3W8=e$LiRICx8M4U$ z6MwJKN)A9yw#pCSbdg7Ja>Dz#U^&@}5YT6)yA&0&6b!tBd5T!MO10T^KqkQb8YL$b zyFEkz`3ogCiu)7+AI}NE_DETvw+P%;inJekkK}*fETqU)D4|ZKQPjzn-7lz81W_;w z#1(vUVSbisM<$FY{0&MWO6!60F%OYBJziQ=k$HB8O#Gi9GWYfA0h6l&RiD5|ovYlV znj|p0`)`4o?mn|I>DBarD3w$jpPuNI(AsRw5BRt^9%vPjIngR(I}G~rG+}stO#knM zeupu|32{x*LClabqbGC?6dBGD|HL_hDnP;Z5noUArGdOsT^sW4ppN47iOLWy8593Y zVg7)woE&C7!1$xo^J!Y(to#Z-@(luS5+MDRUnTH00$(T4B=8LaZxMK#z&8m{5_Bxh zDJAiCibz6U1Rnp0eA$EDkd!m1e;B!Q)CWO7GwA9`mwLWM;3I0x(Uz zwdv)FGC4(gmH=^pj%%JI4sxDvP)()&rQ*W=2$*COK-23;e<~<5P^XrIKP>0j$JnA? z;rv5Ouj@%Oz=g+))vbw%8Q)4xvSlE0VV1<2-$aG>Rf;UpwbYE9vQVoN@U8c8dmUr;tPEJX|v z^e@Y~834=TkJEl#SX(;{m%59Rm_E*u&tXGs|6}$318bCHm{ks!U(@v56 zxk0Zjze8BP?Wbz%%!+qYxG>qK85apG&H|-6_LcnHo+X(j7Xaul3!<>4CA(HyTv+Pb TdaeV7`2TJE)0$sI%thrtYcQK2 diff --git a/sjlee_backup/cats/__pycache__/cats.cpython-38.pyc b/sjlee_backup/cats/__pycache__/cats.cpython-38.pyc deleted file mode 100644 index 0ce884b212975ef258d28325de4f07f1147a1700..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 13727 zcmb7LX>1(ld7f)$XO_z)MN$-XSQ#IUw2e&K@+B*dEMKyn$g~sDwY!nCUhWLZrS{S{ zvyw)E-sslc@A-TDk=OOop{F%})E|3Y^Tz^r@wTgG=`eEk`s2tMPjlXboPGWTawgIo4>|k& z1IRg$=G=~)gZ?4p97=QUK+bLcVdNZ6bMEx@b6WBC4esG|QIE}v?-z~OR)OEC2647s zQBkc@M+vgRW)Pup@JvvNIx0AOe7dC?@sUomiEpJ|3xjf{>6fdm=A}SI=rwXQve{)(B(dksx{64UhBm1rUKj-HEJ)rCqJ)vDf; zK|5L~dhI}YVWrWo2j0=k3qjMvlIyj=_i9bAfkjD6uQlgk^J;#lQuk)-t?GraIGOYC zn^~xZ9{xoO0lNvjN)qRwPqdoNpc;8K^zJ?DRTVZJ9nZCt_te~6ty&A3(R2_U^Ma`A z;b1Pe)P?Yvx6ryAVE2zn|LnJ=YV)-wMs9Kd4A!WH^`PQozd76ySztQY5tdgxzcx1) zDD=Yc)@L3sKY#PfktE*P?~a z>}0jon2LgC*iyuWsi@tVs)b=E2&e9U`2IU3iYg5ZdiMkO9e?=26AwS|pohM_XOCRM zBHHQNWm!0cCe}1B`&O?k;Shj#UPaA!*nzAR8`<)XaNcqTM?B1U0Tm;(#TH!gw(c?Z zq2eVS$u6DL;u+YBz3gGM`AigUApYPLG{daWE=e(h4Zn2q%-%4HAaaB2XcBUnsYjXEt^?>j90t&wyl&}^en21A^LkEKbk{hjT_v$*y4IF-7R1bcK&rky0fO&llh`_XL0M)UJR4uTYed;K$Iq_oV_HJq zYi?IhVv9M9VfhY6?rJY-hW3E=VeKQ@Z-D&yeHl^$jKH0)@=$v}#>tJ&A%uqjd_B_E 
z^sWYI_{O?^5CiQ4KRU?9=5)&U{^QXe13V4(XEm_Du^Fhj*a|PI2pBVpuH--yuoubL zA!jXD{kcgF26m)r#`4i}vsIo~6~7#*PP2*>xo>&$5wMm@Jv=qpYqY@z%ZrTwC{My~ zsUX4s<6annm8e$3`)K8uH~R_+yV;xs7mg;kcM2XQbRDVMY$pO`lO2E>K)K*>OWrJa zO0(Kh3Me-TF_8-+;8Yzvs4g_1CNKvG4YN^e0#vRP2V&=PMUmf!vBg-NiCU_<5NFe6 z#ZISLBj+i{S!__Pc?mSq=+w(y=oa zRkyLWUXS(8&b=MqeH-7026C*L=~$j_8^+tVm9vKRVe63Pm}7d*Ea>VuXR;5Vcc@>N zg>It>(ua*8G(j?oC|vZeL9Dx)0dnY-Kf@87snzQV->)^>V6rp9CcVpGoV}g)RpoLs zIk1W+2Si~Y{3$tovflm?P0FFAhg89#2jJ(`sD?9H0xR!#1%~)eU3e#npeJ8F&kwNb zwds|k=cC|C^cpvK<(QWwu+J+WJ6?QMy4gBm2nz_jpw~@;Wv@aECc}4{QLRq)y#%fu z$Wm)f=>&P+Y^&4s!( zVXRP#zY8Ys5WX?WF6)S64_vOmOZTBmYl z_c~8q5zS@GvQM2l1I2!ytg$PjIC?p$R7_~2m{BAob&TL{ z0&*}lML?mog~ttUV>r*FFky5f>VCwl2-|)=r7*k)*?~r_zGOh!=z+Ot`No`ejjEsN zTd%vNjGyss=+^c%=#HfMIap z?FoDh3MrEDy3y6j&>F9Ttjih61!+sJjZ~JpCQ2L%ksJDS(SpzpD{r$Kxy&6p#_r14N28#*Osa=Fp+JFqdzalTx>2#q``Q4gbw*n0Y@ zvoFeisHd48Ot-5V)a$)16YPBorHT?-JpizP5E6MDeZrNTTd=mLD~0bxwLaFmf~&+) zw`BTR-vO=ye}Su%4!PKUECMGYPuQS>btc8zEaJT%^VzxJ?HnRo@Rs_v$C9!oRu6Car`xeM4bhu*t%mBjB%*5 z1$}v>zt6klIyGBvwBY(z5b5fR+Ie%uSTSjECfE++66Eg#`xhbK2@h!N8EB~nxDj>S z%z6g;aE8Vype4}QkCQXB6?-MS;&kn9b`977ZPggYF)c#7U2(haVwSjZT}ND7bGk05 z2)SKn75|^@)Znv0O@XwDR3)MeeE&6Htp41#N?fyYcqTn>RNQZbA`# z2^ly>asx#rH_pO(r5Tf)Ko*fkCkkR0c6&2yx57YuM5=|YxdjZ2pF=oi88?eF7M47I0Rd3pny_Q`kdvLnIo2wLFZaY%MH3lOO#s~BfRPCCwsAtnO2RA8f2 zT)zW<`W)JXG%PeDryIHfzL5Uibc`KoGob)Vyu3c5FHh`1pWXQw1xKHcod63EK4v9+ z47>?^>Vl!#U44xxtgjowLlS9i<8(DBKCrtHBySz^9F8@Jk+g&^Hq>K&uLJ@`>>VgW z);Wx~rRQF&cka(Yf5OXl?{6&!z6ynFmp%v3GQZ1AEZAO{$-A`t5o_RTL9o{HrcNwd zf47*tUS)n>1@ofD_jTsP4i<(>)>|AD)*5H$>cHHfsanVs+!TCP5tD6z*t%2;E~^gH z)N@pYph0jEpqSm{tBRaS`2ey0Osv0@XxqZm1c(fL8pSsdLJvT*sNvC>b0Br=@av^v zSYO_^6Qg<@*4@cYsXD0-0mSZEI^(86U_TCzy?`d4MG*bXAHJ!Sbf9t|`qaSI(k>*&w$eQM8C*5s9wR$0P&s&ALtjI_17-_>|0J028#-zYU(>GX z*NkiCHS1dDntd&M&6#t37xkTJ53CfOH9y5!j$p*mRm{V;Mm5pz##S?{rZD`y(Kr;3 z0bWjsLyoC@F9o97hx7#9XTr+!({Ek>-dETD`x{?A1?$2L+F^q1g6Ddyx3_uDXfjaS z5YHL$avrmJB~OxC)7G_LGA@Eeh&ZM;S&M=O97o3FJ-b=}xkqr}5W#2MKJ`s>C4$-p 
z!(2YT>q+++;-;HyvjfqFZkZ|%owi5#lf9|lu>x1$s<~>fTC2`#cGXzbRx$bg+G@tv zuDi(sih9u%9V(G=m(ci~<-}!_!el1G-!RnA;S=k%8*W?>p+=pUk|uz4&{nBumg(3H z8nZ#7>B<1?CmBz~sR-6h%BL3fErf6az_yGZZjmM;XRvW^zxCSKm!AI4@@F1}3V1i_ z758t7+t{f!=F4GiIf(Pr>3bHax{iv9XPRjCUBqJd1zZonZxbkW7t@*LK()en0GcAa z-<&pTtDT`1s6LM-j#Zs0sYPgv7KC?hz+5rW5TyiesjN! z#Os$Om~CQSUk+;X3sDGudzys>F-Df4y|!yt;GKLEnKJ)4y{R{8}1-1+e^ zcoQTiqGZQKVf=a45e|C-v9BW}92OkSrgSzA>xcD%F`*x}>>uRp5j|%VXmuQZcSOC# z+U7Pt!QLqh;7imaebKmTsK+C)75F}c*}#XsS<*y4F!O?|ny3ryPSTK=y4F0F)^pIq zEv5b1P+?hHh&?l{K82ChrwK%y?L!Q@YpxdJ#NfxOZZYtZ6S0#+P%oe+NE%*`*v0)I zD@SETbV;rJSZ{1m#AP=>kJ7(GNc|GD)W)IfvYqywLEVU5~EGFTEx>wF>m!T0s1zh4^{WKO0HB0_@r{A3iMaANu5W*1vaUdBF>Um=r=9;6A8Oyirepo$)VnBTy&D75je4-EP zh^3-fYytt(P3>LY{B3vIz?ldK zBhFtm4w&`ZoHp6`V0u0N)RjoV0SD*JGh{k1km<}+dwsHBI}M^$d=iPRX51p328Kv| zOLPpV=jw#ur@gK{slAe0;RTpg&R#qQOOtK`Yu;MPM9?&%Tr`l_nBB}(&9b` zMLn@b$!?J{kpjC-Q`WXrrk4^goqfSpGhI6B%{6CTZih)a=mpl}VxENar1qx%%FDnR z3n7DBXcLZ9jIuTeQDBdwO*gYYnL&Td9cA0*v!iU=>~@qDdUM@nyrFj~d$&G^-=*E2 zep!ZIfH!sZ@m)q9K`q>+V;6Aa{t(3&n9x3`@r&3^wE4pJ{oJL8{`@Y^bq%qkr&3Od z8?X?ljE5W^IRPFQqFH(ZYFPk`d(@%$)yaZ8I~-5N+vu$u{m4wOJW@Y=N*4=4(?3|%7Q;6g7Q%My3N81 zv@w4#-nnJ`aep6#r|nPp`|-{C2mFKh!c}U45dt?3$JP^%&4e4n7 zShRO>966)-?(;|d(d%GHQns`9#Fp9iQB|3g%8h%f-~F{yEL5F^i^oxFqS!QW2?1qZn+m4PRX3lo^T0-tDIGIb;F1q zKKp70^)Qi4eFZ|~@uQ?5ihCRO?DFlznq3uzp5UEPKa0wWN*Ad2O~$Vi{5pa7rZ??& zvGjh81+NfL%{c?dM8=OOGQW)ER#RD|rTQjv<2`;*ZTW#ptzNF>i_9PCEfMc2QWY_A zRp*|Q%gM8trP#KjmL}$Pmkmvb@i?3Is<_RuT|Z9y7dOdm^;u>(SJDST>X(sJcM-gg z1sOcNt3$VS~eBlt}AW4H0m^Z8%dPM zB`73EtmFC7bFp5IGpGc2hbtevzuRf##uMqKS(&eA@UX7|l#IH=T)+{-jA#L*Cuoh^OZXtzI)m1@PmomA>ySh_^;_XK|p z5ZgEdJhf4Oiv+GWPqW~+38cR%#=Zj(>kF}dIkx9j4UVd`sU()9=@uzUVu%NL9A!KQ zg^uwO&ujv@nlTIsJtBUdEnj_kxPO9alWjUKDY~!Y(g5b`NIrC?xp^H@nR~&=X#4RV@*&8#M?f+lhMjprEO1od*6v{PIuk-?3=gES%h~1kKi7%)NSW_Z z_<*WO585nV9M9G2P{c?&M?cc4R$D2Bc+){gJ^RABBln&tij0_%!$@TFVa$Ji?N7e@ z_J^MN%ER#T2qz->q_#zAM4k=e5(FkP9*g7o+LcnCdF4=5kr^C2n~Bp#q@a)rfrK1Pxr z=H^+6%uGbYdDO;=uizsdmZB->25zCMCG*yK3m$ejRfj*ykqK$>zS 
z<)*12%p%W`F$dUN-675!?;G&gf346RUK;{-)BZzxad_QY*RhU0D%2bQ{zfLywGS8U*7!VH)2ny4j$7h6LTjNLiBtd3wZhKWyw zj37B|CwUy%Y$ht(XfHV_Ex^ifSCQKi*U#W}L8sAPQh$X$N+VML5samtNY{!{CPH;an?L?E5#|+6^i3==S zA^0Kzw`a@#$|j3Ea~{EHcp-NRdz@T-@i`!z?qHmJYTy`o11XnmbZn1vS8o5v$jG6Q K;>dj?kNh9Gz}&q6 diff --git a/sjlee_backup/cats/__pycache__/cats.cpython-39.pyc b/sjlee_backup/cats/__pycache__/cats.cpython-39.pyc deleted file mode 100644 index 57119681286e7fb441f6ef1eb7d132ef77bbde33..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 13671 zcmb7LX>1(ld7gV`mdhnYQ4)1n86SD+jsz(sCNrCoHg8m2!Bn1kzP1^z5COu%VX@H{nQ8Z1gKF>R| z%R{u2F7>_h-RFCMpQ^*dc@4jM>D@El6PorfOlqr|7 zqiL2+oq1-%YT9MH>69Hw>y2DFCvDuOSN5{JeA6>DU26`N3t7%^xhS0CEmw zId>xGU~mXIhq9czkaI_H7&(WtoVx@4oL0JXoqIT4(i5{91SKPJR2Xz?VUp`qRa~z& zP=c(e6~-tWJ`+~st_shdm~N|Pa-`d8;ahFgqp(tK1(jO6btzPFrCIHC>aDp-+^)?!)g__B5RI4_^%FJTrLKt>xR$5$bIUH?IpU{@Ap0=uRzmi>jr=e+2W14=a8qY`m(Q|RTHeZe6dd+`2?8NgW zzY{7ysx~`~&_8;4K5Y3|a-$vwe!b;4u_$Tjw`YB9UOnhm8~#kAUAqvKCi6ai=jZE@ zkALxe$ZkTvn#MWkQ|(qOti^sEz5CDmHHA$_$FptaKQlX9uhqj=JRQcz{4lQhIGD?A zbs;+D&$lm!*!^SDKl^Q~`dqz*ky{)9gEi|>BdiA4Zw_}v7MKlogymH~sL#%Z3jK4m zCa2HCYIiXitk91_UnV8lT%Z3^?1HJGg)gl zr{b^`wH0w;D(-Zr>QU4Uqp1fTec*11qDm8k-uvMFCmwz1kw+hV$VcD)^G7aW5uI%9 zvMd}z3u~H_eQQ(~aR|UWzpCcC>_AqEjcofzIBz+FBR*z4kBTwcVhgVLoA(&|Q1#P} zWS7`mHwtC2BT-jfs$aqea?MBjN7NrIUA%IEk?UYNnj*NVs{%eHJF%mAY*J#if_gLY zTH)nN*zClMNq#0&t#-H3s8_d736&9|H3RP9ouI&Gl6Rz(Yf$ zX}V_|)UJ|V?VhzModZ3yA5d%HOn~Nlxiq$hUQlMQfoIS(reOyZ^eAVme zX>1{nF|5Gl$UW^P&46IA@OKp=;+%F|i~jsR^uoUnAq3dcUiM>wDP|YS*bWz1Xpxx+7&Nl^n zkW5@M+Da{$o#bHPPMT&c9j&z5l{r-nDzWOeYDke8S0*0=r>QogQuj*J&gT9aVP@#NM{!Lo$RV^yE&#vpWZ2M`Ac84PdH zp8<<$)!Iq{11BLh@=*-@YJep*ge24i<^UODHtQ{b>Xp(^;$E&Qvi~Tt7)$K9t!nd0 zE?ZXOc3X8aqDqp(2Gv`aKr+p4qoVesFLe+gu^_f8iBoHU7gtJd;-nOUdRS~?VtI*P zP4pRc2W#t%MDK3j+r2w);vc@ z>X&4pTj+zd;vfipkfdVD8og%_1FzaZ4!!ypIHEK4Mk5si^;QQw_q=dT|1$V!f2RXg zz1&IN>87hcyL6Ma%kBhRdMJc*n2Im<4hL8*#}*LKfYBL{z)R}$yZLn**sy$4#>Z--%W~TuRj?|6q`;$%J{8&!O3wVKz)dWR|Qok(<4fk&HC){ 
zq(~>mNFrRv4o5Q>0XvsA0KK$=s)8~YUaDhZDTRH%=GAO8P;9f?^lRO?J)6{`IPXJY%IL<_1BllUw*2}!Vssy}Lyby)*?{EHLvz6jj9KeC zH9<45u6boUumcCGwsRdEzMKQaGbF9fvwziy1qSHL?C&=-JoY<(GZ9I7_{^(51W?7`gt%#6q}YT^)7tWV+5S4dN*?ob(-;= z1Y-{J0kR`ds1Q;0;=KEROGBJ>Rs+HT`%Ntm`<<^;n(d$qlcSOpDwT^+(9;t2D7r|j zXP-Iy0of1rEYri;cGbd0qrYW>z0aUjQ9`Q+0p<}RB9E(2c#?A)*7o(L=$)uGz*r}MPWO$oHy#Hf9I~Tm2MPw7+5!73 zoa^tvYe2dl^hds{56DBwFRa)x@#1T zy-;n7`qJ27pLfP}D(C}E_wz{f^abr@bJE%{|4bWVTQ5@3(G~H#d=Pl%j8#i>swNM#MdSW(c{iMZ=;-nw;b9pM%f z(U*{cVen?I%ZyXR>U)r^CZhdOE z6`6NNDZ=J&I_f!OspkpaLqKGZ-ql&g1WN>Lko5~FSVG9ix>!)mxs9xe6#PX*=>b`T zswjZ?N1=pZ&$^pP9S1C=o)c1cc{It$3*ANxYZ^rIBs~UiNWa?X=|0iZKJ5R<@Hn&( z{qG~PLG-Z@Jn+x%<$6w_-*AN7cgnSTZeYM3I}_VI_p07eQ9DFs_e|OC?p0v>k*K<@GL9kS>Qw9 zNf!*&>FKLPSAES8evwLM2PdgP$$`C%AbD$$-SDYFWMn1uuAv_D`y~(_V%tFRvCd(< zO+EKpy>)*s`V+pjbAPLO@K7jMJM=k(mW3T=V!_VBOx~vD_gMp13*xkvHuPWF`g_H^ z^{aDpDx4FQeV`wwHm@*Oo@8lQ*l3cQZ2(KdmTDtY@KA79#Y}bp66;buysWxRQ>Rf? zf+oR5fKqOQhbnR*6#yjqbBX>^s#y#F5+L&KSrp$yh^x(QGIFO zc8uzCQ*S%#q?)AO3y^qc>4%#JG5sL?^a7gvB0@xzbb;UbTg6?9xwrAuKBJ2!Q99tM zMd6=r;EAe4PdZek%#&^lA9Zk4Bgvlg6iS$XGgMDx2h>*UUq@f-6 zFq>BjB&StvP5YE_5$r+4F15yb95&$~GA8faQPa;qhI@t>uHw$Aub``MBWOz}c0A<) zBiwL>Z8R2*=eB9`z-eDZKMWw#ylWZmycKiBS+Q2!mE4N4qOD-!`?VE2&~A9??a*hM5YMO4T`6x^i_n2`0H54_YBI5H{W<|{Ikz~W9jD~hhldxN=o}TglOW{n{$<@ zz7!?}YU+KvQhgj16;CkL*L#R1-ix>cfHx*o>K>-;rBJn_WC;2nT-==KmCcDo(e2AF zi$%3}8+9m)RopS<&xIGe6v@>_VuM=e!o*2$NyIWx)EwbJCuey;M&cdIB8)LHnlFd- zx%oH(XFbirqL>(q&`CS>EATmf3Yjwh#1436v*Q8XV5Of0z{MW7pVvKd21@o=)WDx% z9pR-^7r%s%@>1|FhjQ3BtRL2k#)N*@a=x2)#`L^Vq%Cpy?NReK8=Bkv6niJzhlfy) z^#$Xqp`MJvMBvd7)&ej2MoA0#z{RevTBr-1P12B!y4E_D)pOCqZKZ?SP)u1{NZX!P zpTNlKlLR8T_8|t1G+&Q!V(?DYHkobdi8$#Ys25Qa1PotB;^7jImE(#cx@5L{qBl3G z*Rq?xj?%wJ$h-}-zxG0_Dd3u6>F&4KyUho`WdmvXdI++CpFxC5P5@$EhSEas&9WIA zFvqk7ShG-bz{3|De%B!S7ierD-p^gLXGL$nVZf-faDf0*3~CY7h*faczy$}jpaM}X z$|DTH*8|mOP)7C0q-*!~I=!rHwZQv<)D)Ag7Gd;Z%v*hgfb2B2(OzNfWq`g0Vb>aM zcs_;nOkCTjUqB)6XghEQrC3k-s;&@-@<1|Dze;cn04n!sd=iVk`c#D=-z$3q`rUp| 
zR3ZKuAsPb^w~+CWUNA;sgt_{-v2@4Iht*ddhSW#VO#L{)#|EH*SSENy(-$xe_2xNN z*=C{NlHZ3v>b)BHb;~C7^<I3EwCZp@y2)wrr(r5;jXNoqyH1hZ3_FC{d>V0sB@ zd(-DjdO6=wA!ZIIV{itBmAC+ia_gjXNp6vf#7izfgg5as)?##tfd}3?Be5ux)vvIF zLj)p*2C^t|AH%)OOi(@bzbsq=?4^$=j^B4Y zae47N10qfenV?NYZ=RRyB3PqjSV1(_(MIm~6c?_T|i~FofW@W%P2Pwb5;~lk@D`Xp}i( z@?hDHZUNF!OjBa%o*!h#ws3CR0ovQyN+U**Y*);S&sjH9fF!C5`;hG$~ zfXnoUDZjvj_CZ@;z;2?=Z*ASr9eNnd@8WFF5My~NW0bhiih#;^rs0!Y@KOx!75EOr z0QX#>pAkVMO2<-8aQBA-u`5cu)gJ&a$~IpX9TyQ$HfLJL6HA4SZaD`hW2fD22s$SY z)Wfh9sNY5l^+y1Sxm53{KVFXN2 zqV$91uA(Pgjo?KI4R5H3bUwgWPcwUs zePMfE{RH|{KM4R#8rXg6H<>H$bN&UnUp$LhiXkhSXlhIM*wBRNPI6hVikqA` zjT1C>aT(lEzr+mpO7^Bl{Tz~_^UG~|V&f@YHBQ{ctUTjX8}r@1c#-#i%ua+we3dcb z6SQfkxZUqWM007=Is&og4q_gvdG#=x@RcOKqyB(jNH)L+si)iV+4 zrx2krMn5y1V-&jbGR(r8P1v{)JW$p6R0?9)fr*yYgDAo+So(D8I}e{bgBOn{{j<0q z=Yt{teKUONc5f9|zoCCFY{K5e+srd~VCWw`i}#3O>{lC;rStH!yo3oPIliaEp+WDx z6Y0!+3a@nVM5x>BZy(Vdx0jF+iH>(eFC=;;u~7*g4NpFJN4MRMO(wG2t_q*M;DKHf zh$OFM-^XpG6Gp?v@gf~gfjeaD-l~Nd;5D+d{*Ua9JX*Gr=a+iUUbcAQD4Gkj6mFnw zE-(GCp|BpvvJi05h#avvV9&swrIy3{Ts)qGMeFulY6rX?g5%#pI@bgHllo3ppRIF{ z1#FFLN4Oc-+D2sql?EL|k{2ZFBwBo591Z)DWpA%T0z(=7NM0_ksxv2Oq*`h22aPMkSahc7B? 
zDv3pDx=CJ=1d<^hM+MJ1p-sHRGn+uJW{g4>kBKX1(^p>_9h_j+WQ#USIv!}Vv~l^w zkq?$>SKfdZm>!8Nk3NTf9(=R(&R#HVZ9g70J`9=m7)S=husu(R1-=Mew;fEMSVGW@ z&`{=U+T`3tL*=oo2wl$&RzaxxVRREMzQffJ=<3c7(?s9M>)ecpnHT~3u^OVgJ; znI6}>WlEHDWRF76_o4pn^o&OY@C}htv&S|Yv__b266|is(Hj&gA+Ek;sQ;mzK z;ymoAe8v%Sk{ZfVuB5yyHG)|bI5Oq{%c?iRdE;RM9__Cdd!wr(z;0T4NH2`8S!+7h zv1@q@eeCM(>Wx9cO1aMNc&xV@7t6bQyKzj;_G1#JB5pHS3;%|_Rewo913>*Lfv||b zX6#Q0K0q)y*i;hO@>o^<2aEoS;F|=(erP1C&k&Fgm&Q{(aANR_FixDP-Bs|5)>z>i z1WN?30hB!w0o8-3?8S5GbF7HAhEh?vI4Bg=s9YS>kaX&y_!hcJOq9qP$^{kT5iA_+ z>_V+`pnvB;79ZAcBP6kvt`H%QqKc-E@~X|SV0T!s(3MaJQlXhqZB-GRjyg z2-*$|jFe&PBA4)d<2erbJi&Vi9w87$E!!o#C!75&3(gXVUMFC~G@+Yrk~5=%&#UeJ zpQ9W9Q}pw0e#|YnreZzNor@3tR*CW zyxp3_BQof)iIaZIyZYK--^CM&_XiG}b|5;QAW`~!`7TcAZ#h+KhBoY8rcV$&z)UCA zALSK7`pDq|%a#eKdCK-|+F#jZk!S8>7!6O`PGOJJ`z*R=pv8f4^4)-I6bz(1veEHf T?mhYaV`F27#!6%Nk3IH(ScBa- diff --git a/sjlee_backup/cats/__pycache__/mod.cpython-38.pyc b/sjlee_backup/cats/__pycache__/mod.cpython-38.pyc deleted file mode 100644 index 23b810b5bd522f9212fdb58974ba1473644c2080..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 7573 zcmb7JOKjZM73Cv29L`6wEKB~ycBYQ=nbwhP+4|ihi=vWYc41< zGcFH+b0Fi4fHNT{!I{iBqu?Brhrl_MarVhm@~}Jt&13SYd<2^JL*|$~4w>Vb%s4nF zW||~2V_Nl4U!X5T-U0PZ(wBFTva6YJFaTD^G$!l0B1vhT=w$RnjLgs!a0jG zya^Ip>e)zJ7m>E4$6736JvL%9&PgHlwLDskSoEmXlm>X_n%*<6YY%+`l0DJWuVZ4- zx!l~1YAz|=@>SqBov^XuCsr7#hV+ws$Ca`X)Dv4anr_r+2Z=e~aKrj1zog?27axcu z-v~NQ*Ylx64}zq4-T$!b2T{XqCe~~_xIH72(j_4>UCa)@N6z7uzj=+PmF!=g+(_-BIl& z-;2WO(C-HIrEcK3!Sq5Sa51WG$5-`c*PodVmzuus%)8#LZfDwaA>L}ssm|TR^w8Tf z^oFNEG-2Xb7P^Mt7p88D@_$Whi1WER2sw||uz%1!^_EwUTCpx72Bf8q)iIr=> zk&e!$SEdW*5m-b|TQ_@f7KgOhl=@+9LX*ZL8f<3QH!k2SS&RNqUB)iDw!LvdLl0q_ zvDwqtEv!0S^D*A+>8n=OW^fyNMr~3j+0}e(u{>LM&%`@~8T@+Ysukz=Y|BHZ9YbP) zWOW+*svYN+3O^R=a%`^`)92VetgRN}0^7SHm@@4Cfg3tr-uMDXwUn4_|GB8Gyv3xr z;;VM(G#j`4BnNo8QIhkT?EntK1qAATV%~1}cM?na9UqMb%nAM=biA%~VNamdVq&!1 zP9iQPVm1+167g=52ctt?MPFV{^t-UUI=oF{5`ed(`^j!V7VkD2p$~JG4nfFQk(23g 
[compiled .pyc binary patch data omitted; the diff header and opening lines of the next deleted Python source are garbled here]
-    if keep_prob > 0.0 and scale_by_keep:
-        random_tensor.div_(keep_prob)
-    return x * random_tensor
-
-
-def _no_grad_trunc_normal_(tensor, mean, std, a, b):
-    # Cut & paste from PyTorch official master until it's in a few official releases - RW
-    # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
-    def norm_cdf(x):
-        # Computes standard normal cumulative distribution function
-        return (1. + math.erf(x / math.sqrt(2.))) / 2.
-
-    if (mean < a - 2 * std) or (mean > b + 2 * std):
-        warnings.warn("mean is more than 2 std from [a, b] in nn.init.trunc_normal_. "
-                      "The distribution of values may be incorrect.",
-                      stacklevel=2)
-
-    with torch.no_grad():
-        # Values are generated by using a truncated uniform distribution and
-        # then using the inverse CDF for the normal distribution.
-        # Get upper and lower cdf values
-        l = norm_cdf((a - mean) / std)
-        u = norm_cdf((b - mean) / std)
-
-        # Uniformly fill tensor with values from [l, u], then translate to
-        # [2l-1, 2u-1].
-        tensor.uniform_(2 * l - 1, 2 * u - 1)
-
-        # Use inverse cdf transform for normal distribution to get truncated
-        # standard normal
-        tensor.erfinv_()
-
-        # Transform to proper mean, std
-        tensor.mul_(std * math.sqrt(2.))
-        tensor.add_(mean)
-
-        # Clamp to ensure it's in the proper range
-        tensor.clamp_(min=a, max=b)
-        return tensor
-
-
-def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.):
-    # type: (Tensor, float, float, float, float) -> Tensor
-    r"""Fills the input Tensor with values drawn from a truncated
-    normal distribution. The values are effectively drawn from the
-    normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)`
-    with values outside :math:`[a, b]` redrawn until they are within
-    the bounds. The method used for generating the random values works
-    best when :math:`a \leq \text{mean} \leq b`.
- Args: - tensor: an n-dimensional `torch.Tensor` - mean: the mean of the normal distribution - std: the standard deviation of the normal distribution - a: the minimum cutoff value - b: the maximum cutoff value - Examples: - >>> w = torch.empty(3, 5) - >>> nn.init.trunc_normal_(w) - """ - return _no_grad_trunc_normal_(tensor, mean, std, a, b) - -# ================= timm functions END================= # - - - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - -class Attention(nn.Module): - def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights - self.scale = qk_scale or head_dim ** -0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - B, N, C = x.shape - qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class MultiscaleBlock(nn.Module): - - def __init__(self, dim, 
num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.attn = Attention( - dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - self.attn_multiscale = Attention( - dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm1 = norm_layer(dim) - self.norm2 = norm_layer(dim) - self.norm3 = norm_layer(dim) - self.norm4 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - self.mlp2 = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - ''' - Multi-level aggregation - ''' - B, N, H, W = x.shape - if N == 1: - x = x.flatten(0, 1) - a = self.norm1(x) - x = x + self.drop_path(self.attn(self.norm1(x))) - x = x + self.drop_path(self.mlp(self.norm2(x))) - return x.view(B, N, H, W) - x = x.flatten(0, 1) - x = x + self.drop_path(self.attn(self.norm1(x))) - x = x + self.drop_path(self.mlp2(self.norm4(x))) - x = x.view(B, N, H, W).transpose(1, 2).flatten(0, 1) - x = x + self.drop_path(self.attn_multiscale(self.norm3(x))) - x = x.view(B, H, N, W).transpose(1, 2).flatten(0, 1) - x = x + self.drop_path(self.mlp(self.norm2(x))) - x = x.view(B, N, H, W) - return x - - -class TransformerAggregator(nn.Module): - def __init__(self, num_hyperpixel, img_size=224, embed_dim=768, depth=12, num_heads=12, mlp_ratio=4., qkv_bias=True, qk_scale=None, - drop_rate=0., attn_drop_rate=0., drop_path_rate=0., norm_layer=None): - super().__init__() - self.img_size = img_size - self.num_features = self.embed_dim = embed_dim # num_features for consistency with other 
models - norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6) - - self.pos_embed_x = nn.Parameter(torch.zeros(1, num_hyperpixel, 1, img_size, embed_dim // 2)) - self.pos_embed_y = nn.Parameter(torch.zeros(1, num_hyperpixel, img_size, 1, embed_dim // 2)) - self.pos_drop = nn.Dropout(p=drop_rate) - - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule - self.blocks = nn.Sequential(*[ - MultiscaleBlock( - dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer) - for i in range(depth)]) - - self.proj = nn.Linear(embed_dim, img_size ** 2) - self.norm = norm_layer(embed_dim) - - trunc_normal_(self.pos_embed_x, std=.02) - trunc_normal_(self.pos_embed_y, std=.02) - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - def forward(self, corr): - B = corr.shape[0] - x = corr.clone() - - pos_embed = torch.cat((self.pos_embed_x.repeat(1, 1, self.img_size, 1, 1), self.pos_embed_y.repeat(1, 1, 1, self.img_size, 1)), dim=4) - pos_embed = pos_embed.flatten(2, 3) - - x = x.transpose(-1, -2) + pos_embed - x = self.proj(self.blocks(x)).transpose(-1, -2) + corr # swapping the axis for swapping self-attention. 
- - x = x + pos_embed - x = self.proj(self.blocks(x)) + corr - - return x.mean(1) - - -class FeatureExtractionHyperPixel(nn.Module): - def __init__(self, hyperpixel_ids, feature_size, freeze=True): - super().__init__() - self.backbone = resnet.resnet101(pretrained=True) - self.feature_size = feature_size - if freeze: - for param in self.backbone.parameters(): - param.requires_grad = False - nbottlenecks = [3, 4, 23, 3] - self.bottleneck_ids = reduce(add, list(map(lambda x: list(range(x)), nbottlenecks))) - self.layer_ids = reduce(add, [[i + 1] * x for i, x in enumerate(nbottlenecks)]) - self.hyperpixel_ids = hyperpixel_ids - - - def forward(self, img): - r"""Extract desired a list of intermediate features""" - - feats = [] - - # Layer 0 - feat = self.backbone.conv1.forward(img) - feat = self.backbone.bn1.forward(feat) - feat = self.backbone.relu.forward(feat) - feat = self.backbone.maxpool.forward(feat) - if 0 in self.hyperpixel_ids: - feats.append(feat.clone()) - - # Layer 1-4 - for hid, (bid, lid) in enumerate(zip(self.bottleneck_ids, self.layer_ids)): - res = feat - feat = self.backbone.__getattr__('layer%d' % lid)[bid].conv1.forward(feat) - feat = self.backbone.__getattr__('layer%d' % lid)[bid].bn1.forward(feat) - feat = self.backbone.__getattr__('layer%d' % lid)[bid].relu.forward(feat) - feat = self.backbone.__getattr__('layer%d' % lid)[bid].conv2.forward(feat) - feat = self.backbone.__getattr__('layer%d' % lid)[bid].bn2.forward(feat) - feat = self.backbone.__getattr__('layer%d' % lid)[bid].relu.forward(feat) - feat = self.backbone.__getattr__('layer%d' % lid)[bid].conv3.forward(feat) - feat = self.backbone.__getattr__('layer%d' % lid)[bid].bn3.forward(feat) - - if bid == 0: - res = self.backbone.__getattr__('layer%d' % lid)[bid].downsample.forward(res) - - feat += res - - if hid + 1 in self.hyperpixel_ids: - feats.append(feat.clone()) - #if hid + 1 == max(self.hyperpixel_ids): - # break - feat = self.backbone.__getattr__('layer%d' % 
lid)[bid].relu.forward(feat) - - # Up-sample & concatenate features to construct a hyperimage - - """ - for idx, feat in enumerate(feats): - feats[idx] = F.interpolate(feat, self.feature_size, None, 'bilinear', True) - """ - - return feats - - -class CATs(nn.Module): - def __init__(self, - feature_size=16, - feature_proj_dim=128, - depth=4, - num_heads=6, - mlp_ratio=4, - hyperpixel_ids=[0,8,20,21,26,28,29,30], - freeze=True): - super().__init__() - self.feature_size = feature_size - self.feature_proj_dim = feature_proj_dim - self.decoder_embed_dim = self.feature_size ** 2 + self.feature_proj_dim - - channels = [64] + [256] * 3 + [512] * 4 + [1024] * 23 + [2048] * 3 - - self.feature_extraction = FeatureExtractionHyperPixel(hyperpixel_ids, feature_size, freeze) - self.proj = nn.ModuleList([ - nn.Linear(channels[i], self.feature_proj_dim) for i in hyperpixel_ids - ]) - - self.decoder = TransformerAggregator( - img_size=self.feature_size, embed_dim=self.decoder_embed_dim, depth=depth, num_heads=num_heads, - mlp_ratio=mlp_ratio, qkv_bias=True, norm_layer=partial(nn.LayerNorm, eps=1e-6), - num_hyperpixel=len(hyperpixel_ids)) - - self.l2norm = FeatureL2Norm() - - self.x_normal = np.linspace(-1,1,self.feature_size) - self.x_normal = nn.Parameter(torch.tensor(self.x_normal, dtype=torch.float, requires_grad=False)) - self.y_normal = np.linspace(-1,1,self.feature_size) - self.y_normal = nn.Parameter(torch.tensor(self.y_normal, dtype=torch.float, requires_grad=False)) - - def softmax_with_temperature(self, x, beta, d = 1): - r'''SFNet: Learning Object-aware Semantic Flow (Lee et al.)''' - M, _ = x.max(dim=d, keepdim=True) - x = x - M # subtract maximum value for stability - exp_x = torch.exp(x/beta) - exp_x_sum = exp_x.sum(dim=d, keepdim=True) - return exp_x / exp_x_sum - - def soft_argmax(self, corr, beta=0.02): - r'''SFNet: Learning Object-aware Semantic Flow (Lee et al.)''' - b,_,h,w = corr.size() - - corr = self.softmax_with_temperature(corr, beta=beta, d=1) - corr = 
corr.view(-1,h,w,h,w) # (target hxw) x (source hxw) - - grid_x = corr.sum(dim=1, keepdim=False) # marginalize to x-coord. - x_normal = self.x_normal.expand(b,w) - x_normal = x_normal.view(b,w,1,1) - grid_x = (grid_x*x_normal).sum(dim=1, keepdim=True) # b x 1 x h x w - - grid_y = corr.sum(dim=2, keepdim=False) # marginalize to y-coord. - y_normal = self.y_normal.expand(b,h) - y_normal = y_normal.view(b,h,1,1) - grid_y = (grid_y*y_normal).sum(dim=1, keepdim=True) # b x 1 x h x w - return grid_x, grid_y - - def mutual_nn_filter(self, correlation_matrix): - r"""Mutual nearest neighbor filtering (Rocco et al. NeurIPS'18)""" - corr_src_max = torch.max(correlation_matrix, dim=3, keepdim=True)[0] - corr_trg_max = torch.max(correlation_matrix, dim=2, keepdim=True)[0] - corr_src_max[corr_src_max == 0] += 1e-30 - corr_trg_max[corr_trg_max == 0] += 1e-30 - - corr_src = correlation_matrix / corr_src_max - corr_trg = correlation_matrix / corr_trg_max - - return correlation_matrix * (corr_src * corr_trg) - - def corr(self, src, trg): - return src.flatten(2).transpose(-1, -2) @ trg.flatten(2) - - def forward(self, target, source): - B, _, H, W = target.size() - - src_feats = self.feature_extraction(source) - tgt_feats = self.feature_extraction(target) - - corrs = [] - src_feats_proj = [] - tgt_feats_proj = [] - for i, (src, tgt) in enumerate(zip(src_feats, tgt_feats)): - corr = self.corr(self.l2norm(src), self.l2norm(tgt)) - corrs.append(corr) - src_feats_proj.append(self.proj[i](src.flatten(2).transpose(-1, -2))) - tgt_feats_proj.append(self.proj[i](tgt.flatten(2).transpose(-1, -2))) - - src_feats = torch.stack(src_feats_proj, dim=1) - tgt_feats = torch.stack(tgt_feats_proj, dim=1) - corr = torch.stack(corrs, dim=1) - - corr = self.mutual_nn_filter(corr) - - refined_corr = self.decoder(corr, src_feats, tgt_feats) - - grid_x, grid_y = self.soft_argmax(refined_corr.view(B, -1, self.feature_size, self.feature_size)) - - flow = torch.cat((grid_x, grid_y), dim=1) - flow = 
unnormalise_and_convert_mapping_to_flow(flow)
-
-        return flow
diff --git a/sjlee_backup/cats/feature_backbones/__pycache__/resnet.cpython-38.pyc b/sjlee_backup/cats/feature_backbones/__pycache__/resnet.cpython-38.pyc
deleted file mode 100644
index 29a0c6e1e376f4fd95eb8b16b6c04938e676742b..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
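The `soft_argmax` in the CATs model above turns the refined correlation volume into continuous coordinates: a low-temperature softmax over matching scores, followed by the expectation over a coordinate grid normalized to `[-1, 1]`. A minimal pure-Python sketch of that computation for a single `h x w` map (illustrative and simplified from the batched tensor version in the patch):

```python
import math

def soft_argmax_2d(corr, beta=0.02):
    # corr: h x w list of correlation scores for one target position.
    # Temperature softmax (max-subtracted for numerical stability), then
    # the expectation of grid coordinates normalized to [-1, 1] -- the
    # idea behind the softmax_with_temperature / soft_argmax pair.
    h, w = len(corr), len(corr[0])
    m = max(max(row) for row in corr)
    p = [[math.exp((v - m) / beta) for v in row] for row in corr]
    z = sum(sum(row) for row in p)
    xs = [-1 + 2 * j / (w - 1) for j in range(w)]  # like np.linspace(-1, 1, w)
    ys = [-1 + 2 * i / (h - 1) for i in range(h)]
    gx = sum(p[i][j] * xs[j] for i in range(h) for j in range(w)) / z
    gy = sum(p[i][j] * ys[i] for i in range(h) for j in range(w)) / z
    return gx, gy
```

With the default `beta=0.02` the softmax is sharply peaked, so the result stays very close to the arg-max cell while remaining differentiable, which is what allows the flow head to be trained end-to-end.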
z?b!IMWaL#zmY1llO&o>IlEoS;$>+=Ruryg(vT|7=LSa=_LE;SiHpz-GQfHpqO;j^l z7@GAwVxgtZ@i7a-W}_a7@CA(e?=a&3)u{J}hD%00G=6i)ZDiTx+c0?q47h=3n7l;Z zrId0LOZ=G8Uqik8gosV#91&iY@!EWVG7m@M@+QsvNn;-3u|7l4EdGLE!+qH4bTz$eK35cSN z68r(a;&~Pc%TXM}d{9Ju&@3W7;eks+r+-V|&4xMHpehXDfweF%GW%APy{|GVAj>R7 zCTI~hHzg3+H%v}!fm=oH9@R|KJvL0X>lw6~rIeTP!697DAr;QTRtxtC4;+S~%qRF) z6pt(Ssq6@PQ`uCc3GKKdXojd0)A#7^N7LFF)MnFKYL(Q!me$Uq)<|oqRZ@E_)c22z zL-*Zpx9sVTtTL!#RkI?j8`rJ|Lt}GA01?917$-Na|WA z!8F%V^g8%^kdnrpF*f<`-T}q64lq4cP_$u!2=4k0MRxKCB`r266_1NPuwlpdT#3kE zdOvaB9IQCHj@@^`8H1(7`I3)~b;LJGaWyvK0k+&QNo&{%AESGZ;m63C$NA6P?Hj#b z7l1IkomM-H^Pf&5w=%z~QChEfr`E5;}*0 zd3@q|ij@R4b?a#Xjlxgx6^NoIIMvil1hJfM0i&9#_Cs<#3oeIHH;qs?4=iiqYLXUJ z3WU?@QLdfOnCOi%96K|g^X}3(6b@4`x@jRjKE^rtHxT#?W!DtSZWf$MIgvaa0R|&C zH_k&p9pvY`1~PNXls-Zm7Z2|d2c?ybY zOCzGYk*CbVUz&Tl`zms?bX@o%EwtdJ^*s&V==84jNZ(;UnjyUSJ(K!=vmY88nJE7q zWmtNGVqvIH!5XE4{2Y&+fDgtem*|qSAnOb7n*rf#h@Fzz*mV1S&llry;*50tmWM$I zk;@hl!Z)$WS`{SO%x6?gz=q>@e5fzITbWd}w40h}ods0H_M7z$~r|ReNX|J99(PkInRv?1uao`3;sn1EFDNqasUI zH;~st4$8jH@CDM^H%45N#ZVbAEWizM3CtlbH!91II4j}8!hVsW95jJiPNnrKB&DU2 znr06aXZjNO6M1Ueg{6(tpLDDRgF-k}7SIZ{ z;`JRZGV+-e2K_546KqK-Qu0sICB{&|ncCQx#?a|Hx=rGd)zpJu1EizOOMXP;FNm;v zCZWo0A|DXBLS#IpF~%anL^y&uklZlwQI&_&RqI&p@FSaP1bL|aUr~F~;t-fN(wbV7 z`A*#hB9&)CHAGlHu8`^8gAqn1#0n?_0h_$^v?Y!0s}?+609)aBLJAUFy#GtP)*NUg}g)k%}pestsu_uII%U}4xe~#%J<@Ix=Z~ZxCkf; zDy-`{c7v8Ge~prBHy=@90(umS`N?UJ5br2L6`+d$Er7B}Z6fvhjucFHZN%BcDi+Es zHqXeWO?k`4@9R@nJ+E)0o_)RjDMGA|P-8>d_Vq2d<)N<8Yat9FwLN8D9PZ`1a=UYD zk@m{_qPbMAmn)0Q)#?hiYen;;uN@xzGS=V%b9AfmBH8b)BbJsf*w>r(O*sI>!1@iO z9{_zM^F!N(bVac%luQKg3ns=&r1s=Uq7wk`+@Ya{qQ&Nsw0liEYs{KHhyC`CrN^%yDNsoK|9_0ZCnCwwV?FB`p0|#r5i7GQGs%q_( zsFD{mkBf@dnz60C3hsa}e?w(^M8 zI-vDGc!w^ZDI#h-|DqzI44Ym(3hg4kSfDMFi-aXIz!E05bF#vGEI{SlvqP);DqMj7 z-|@mvAZ$0&~r zIPadD5HMbWlZ02qD+vcLf+7EjWXKZ!mN=GhfjHWTW3VFsk+^4CFEu{%@?s{JFtlUX zuMYc7zBe)+v)`)rC?9hBGwHS+21-ly`I!*oUG%x{H4JTJsiKqK1t=~XZ)&x_c)Rl1qL 1: - raise NotImplementedError("Dilation > 1 not supported in BasicBlock") - # Both 
self.conv1 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = norm_layer(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = norm_layer(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - __constants__ = ['downsample'] - - def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, - base_width=64, dilation=1, norm_layer=None): - super(Bottleneck, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - width = int(planes * (base_width / 64.)) * groups - # Both self.conv2 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv1x1(inplanes, width) - self.bn1 = norm_layer(width) - self.conv2 = conv3x3(width, width, stride, groups, dilation) - self.bn2 = norm_layer(width) - self.conv3 = conv1x1(width, planes * self.expansion) - self.bn3 = norm_layer(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - - -class ResNet(nn.Module): - - def __init__(self, block, layers, num_classes=1000, zero_init_residual=False, - groups=1, width_per_group=64, replace_stride_with_dilation=None, - norm_layer=None): - super(ResNet, self).__init__() - if 
norm_layer is None: - norm_layer = nn.BatchNorm2d - self._norm_layer = norm_layer - - self.inplanes = 64 - self.dilation = 1 - if replace_stride_with_dilation is None: - # each element in the tuple indicates if we should replace - # the 2x2 stride with a dilated convolution instead - replace_stride_with_dilation = [False, False, False] - if len(replace_stride_with_dilation) != 3: - raise ValueError("replace_stride_with_dilation should be None " - "or a 3-element tuple, got {}".format(replace_stride_with_dilation)) - self.groups = groups - self.base_width = width_per_group - self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3, - bias=False) - self.bn1 = norm_layer(self.inplanes) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2, - dilate=replace_stride_with_dilation[0]) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2, - dilate=replace_stride_with_dilation[1]) - self.layer4 = self._make_layer(block, 512, layers[3], stride=2, - dilate=replace_stride_with_dilation[2]) - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - self.fc = nn.Linear(512 * block.expansion, num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - # Zero-initialize the last BN in each residual branch, - # so that the residual branch starts with zeros, and each residual block behaves like an identity. 
- # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 - if zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - nn.init.constant_(m.bn3.weight, 0) - elif isinstance(m, BasicBlock): - nn.init.constant_(m.bn2.weight, 0) - - def _make_layer(self, block, planes, blocks, stride=1, dilate=False): - norm_layer = self._norm_layer - downsample = None - previous_dilation = self.dilation - if dilate: - self.dilation *= stride - stride = 1 - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - conv1x1(self.inplanes, planes * block.expansion, stride), - norm_layer(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample, self.groups, - self.base_width, previous_dilation, norm_layer)) - self.inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append(block(self.inplanes, planes, groups=self.groups, - base_width=self.base_width, dilation=self.dilation, - norm_layer=norm_layer)) - - return nn.Sequential(*layers) - - def _forward(self, x): - x = self.conv1(x) - print(x.shape) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - - x = self.avgpool(x) - x = torch.flatten(x, 1) - x = self.fc(x) - - return x - - # Allow for accessing forward method in a inherited class - forward = _forward - - -def _resnet(arch, block, layers, pretrained, progress, **kwargs): - model = ResNet(block, layers, **kwargs) - if pretrained: - state_dict = load_state_dict_from_url(model_urls[arch], - progress=progress) - model.load_state_dict(state_dict) - return model - - -def resnet18(pretrained=False, progress=True, **kwargs): - r"""ResNet-18 model from - `"Deep Residual Learning for Image Recognition" `_ - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the 
download to stderr - """ - return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained, progress, - **kwargs) - - -def resnet34(pretrained=False, progress=True, **kwargs): - r"""ResNet-34 model from - `"Deep Residual Learning for Image Recognition" `_ - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet34', BasicBlock, [3, 4, 6, 3], pretrained, progress, - **kwargs) - - -def resnet50(pretrained=False, progress=True, **kwargs): - r"""ResNet-50 model from - `"Deep Residual Learning for Image Recognition" `_ - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet50', Bottleneck, [3, 4, 6, 3], pretrained, progress, - **kwargs) - - -def resnet101(pretrained=False, progress=True, **kwargs): - r"""ResNet-101 model from - `"Deep Residual Learning for Image Recognition" `_ - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet101', Bottleneck, [3, 4, 23, 3], pretrained, progress, - **kwargs) - - -def resnet152(pretrained=False, progress=True, **kwargs): - r"""ResNet-152 model from - `"Deep Residual Learning for Image Recognition" `_ - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet152', Bottleneck, [3, 8, 36, 3], pretrained, progress, - **kwargs) - - -def resnext50_32x4d(pretrained=False, progress=True, **kwargs): - r"""ResNeXt-50 32x4d model from - `"Aggregated Residual Transformation for Deep Neural Networks" `_ - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of 
the download to stderr - """ - kwargs['groups'] = 32 - kwargs['width_per_group'] = 4 - return _resnet('resnext50_32x4d', Bottleneck, [3, 4, 6, 3], - pretrained, progress, **kwargs) - - -def resnext101_32x8d(pretrained=False, progress=True, **kwargs): - r"""ResNeXt-101 32x8d model from - `"Aggregated Residual Transformation for Deep Neural Networks" `_ - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - kwargs['groups'] = 32 - kwargs['width_per_group'] = 8 - return _resnet('resnext101_32x8d', Bottleneck, [3, 4, 23, 3], - pretrained, progress, **kwargs) - - -def wide_resnet50_2(pretrained=False, progress=True, **kwargs): - r"""Wide ResNet-50-2 model from - `"Wide Residual Networks" `_ - The model is the same as ResNet except for the bottleneck number of channels - which is twice larger in every block. The number of channels in outer 1x1 - convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048 - channels, and in Wide ResNet-50-2 has 2048-1024-2048. - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - kwargs['width_per_group'] = 64 * 2 - return _resnet('wide_resnet50_2', Bottleneck, [3, 4, 6, 3], - pretrained, progress, **kwargs) - - -def wide_resnet101_2(pretrained=False, progress=True, **kwargs): - r"""Wide ResNet-101-2 model from - `"Wide Residual Networks" `_ - The model is the same as ResNet except for the bottleneck number of channels - which is twice larger in every block. The number of channels in outer 1x1 - convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048 - channels, and in Wide ResNet-50-2 has 2048-1024-2048. 
- Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - kwargs['width_per_group'] = 64 * 2 - return _resnet('wide_resnet101_2', Bottleneck, [3, 4, 23, 3], - pretrained, progress, **kwargs) \ No newline at end of file diff --git a/sjlee_backup/cats/mod.py b/sjlee_backup/cats/mod.py deleted file mode 100644 index 7ce21fa..0000000 --- a/sjlee_backup/cats/mod.py +++ /dev/null @@ -1,213 +0,0 @@ -import torch -import torch.nn as nn -import numpy as np -from torch.autograd import Variable - -r''' -Copy-pasted from GLU-Net -https://github.com/PruneTruong/GLU-Net -''' - - -def conv(in_planes, out_planes, kernel_size=3, stride=1, padding=1, dilation=1, batch_norm=False): - if batch_norm: - return nn.Sequential( - nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, - padding=padding, dilation=dilation, bias=True), - nn.BatchNorm2d(out_planes), - nn.LeakyReLU(0.1, inplace=True)) - else: - return nn.Sequential( - nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, - padding=padding, dilation=dilation, bias=True), - nn.LeakyReLU(0.1)) - - -def predict_flow(in_planes): - return nn.Conv2d(in_planes,2,kernel_size=3,stride=1,padding=1,bias=True) - - -def deconv(in_planes, out_planes, kernel_size=4, stride=2, padding=1): - return nn.ConvTranspose2d(in_planes, out_planes, kernel_size, stride, padding, bias=True) - - -def unnormalise_and_convert_mapping_to_flow(map): - # here map is normalised to -1;1 - # we put it back to 0,W-1, then convert it to flow - B, C, H, W = map.size() - mapping = torch.zeros_like(map) - # mesh grid - mapping[:,0,:,:] = (map[:, 0, :, :].float().clone() + 1) * (W - 1) / 2.0 # unormalise - mapping[:,1,:,:] = (map[:, 1, :, :].float().clone() + 1) * (H - 1) / 2.0 # unormalise - - xx = torch.arange(0, W).view(1,-1).repeat(H,1) - yy = torch.arange(0, H).view(-1,1).repeat(1,W) - xx = xx.view(1,1,H,W).repeat(B,1,1,1) - 
yy = yy.view(1,1,H,W).repeat(B,1,1,1) - grid = torch.cat((xx,yy),1).float() - - if mapping.is_cuda: - grid = grid.cuda() - flow = mapping - grid - return flow - - -class CorrelationVolume(nn.Module): - """ - Implementation by Ignacio Rocco - paper: https://arxiv.org/abs/1703.05593 - project: https://github.com/ignacio-rocco/cnngeometric_pytorch - """ - - def __init__(self): - super(CorrelationVolume, self).__init__() - - def forward(self, feature_A, feature_B): - b, c, h, w = feature_A.size() - - # reshape features for matrix multiplication - feature_A = feature_A.transpose(2, 3).contiguous().view(b, c, h * w) # shape (b,c,h*w) - feature_B = feature_B.view(b, c, h * w).transpose(1, 2) # shape (b,h*w,c) - feature_mul = torch.bmm(feature_B, feature_A) # shape (b,h*w,h*w) - correlation_tensor = feature_mul.view(b, h, w, h * w).transpose(2, 3).transpose(1, 2) - return correlation_tensor # shape (b,h*w,h,w) - - -class FeatureL2Norm(nn.Module): - """ - Implementation by Ignacio Rocco - paper: https://arxiv.org/abs/1703.05593 - project: https://github.com/ignacio-rocco/cnngeometric_pytorch - """ - def __init__(self): - super(FeatureL2Norm, self).__init__() - - def forward(self, feature, dim=1): - epsilon = 1e-6 - norm = torch.pow(torch.sum(torch.pow(feature, 2), dim) + epsilon, 0.5).unsqueeze(dim).expand_as(feature) - return torch.div(feature, norm) - - -class OpticalFlowEstimator(nn.Module): - - def __init__(self, in_channels, batch_norm): - super(OpticalFlowEstimator, self).__init__() - - dd = np.cumsum([128,128,96,64,32]) - self.conv_0 = conv(in_channels, 128, kernel_size=3, stride=1, batch_norm=batch_norm) - self.conv_1 = conv(in_channels + dd[0], 128, kernel_size=3, stride=1, batch_norm=batch_norm) - self.conv_2 = conv(in_channels + dd[1], 96, kernel_size=3, stride=1, batch_norm=batch_norm) - self.conv_3 = conv(in_channels + dd[2], 64, kernel_size=3, stride=1, batch_norm=batch_norm) - self.conv_4 = conv(in_channels + dd[3], 32, kernel_size=3, stride=1, 
batch_norm=batch_norm) - self.predict_flow = predict_flow(in_channels + dd[4]) - - def forward(self, x): - # dense net connection - x = torch.cat((self.conv_0(x), x),1) - x = torch.cat((self.conv_1(x), x),1) - x = torch.cat((self.conv_2(x), x),1) - x = torch.cat((self.conv_3(x), x),1) - x = torch.cat((self.conv_4(x), x),1) - flow = self.predict_flow(x) - return x, flow - - -class OpticalFlowEstimatorNoDenseConnection(nn.Module): - - def __init__(self, in_channels, batch_norm): - super(OpticalFlowEstimatorNoDenseConnection, self).__init__() - self.conv_0 = conv(in_channels, 128, kernel_size=3, stride=1, batch_norm=batch_norm) - self.conv_1 = conv(128, 128, kernel_size=3, stride=1, batch_norm=batch_norm) - self.conv_2 = conv(128, 96, kernel_size=3, stride=1, batch_norm=batch_norm) - self.conv_3 = conv(96, 64, kernel_size=3, stride=1, batch_norm=batch_norm) - self.conv_4 = conv(64, 32, kernel_size=3, stride=1, batch_norm=batch_norm) - self.predict_flow = predict_flow(32) - - def forward(self, x): - x = self.conv_4(self.conv_3(self.conv_2(self.conv_1(self.conv_0(x))))) - flow = self.predict_flow(x) - return x, flow - - -# extracted from DGCNet -def conv_blck(in_channels, out_channels, kernel_size=3, - stride=1, padding=1, dilation=1, bn=False): - if bn: - return nn.Sequential(nn.Conv2d(in_channels, out_channels, kernel_size, - stride, padding, dilation), - nn.BatchNorm2d(out_channels), - nn.ReLU(inplace=True)) - else: - return nn.Sequential(nn.Conv2d(in_channels, out_channels, kernel_size, - stride, padding, dilation), - nn.ReLU(inplace=True)) - - -def conv_head(in_channels): - return nn.Conv2d(in_channels, 2, kernel_size=3, padding=1) - - -class CorrespondenceMapBase(nn.Module): - def __init__(self, in_channels, bn=False): - super().__init__() - - def forward(self, x1, x2=None, x3=None): - x = x1 - # concatenating dimensions - if (x2 is not None) and (x3 is None): - x = torch.cat((x1, x2), 1) - elif (x2 is None) and (x3 is not None): - x = torch.cat((x1, x3), 1) - 
elif (x2 is not None) and (x3 is not None): - x = torch.cat((x1, x2, x3), 1) - - return x - - -class CMDTop(CorrespondenceMapBase): - def __init__(self, in_channels, bn=False): - super().__init__(in_channels, bn) - chan = [128, 128, 96, 64, 32] - self.conv0 = conv_blck(in_channels, chan[0], bn=bn) - self.conv1 = conv_blck(chan[0], chan[1], bn=bn) - self.conv2 = conv_blck(chan[1], chan[2], bn=bn) - self.conv3 = conv_blck(chan[2], chan[3], bn=bn) - self.conv4 = conv_blck(chan[3], chan[4], bn=bn) - self.final = conv_head(chan[-1]) - - def forward(self, x1, x2=None, x3=None): - x = super().forward(x1, x2, x3) - x = self.conv4(self.conv3(self.conv2(self.conv1(self.conv0(x))))) - return self.final(x) - - -def warp(x, flo): - """ - warp an image/tensor (im2) back to im1, according to the optical flow - x: [B, C, H, W] (im2) - flo: [B, 2, H, W] flow - """ - B, C, H, W = x.size() - # mesh grid - xx = torch.arange(0, W).view(1, -1).repeat(H, 1) - yy = torch.arange(0, H).view(-1, 1).repeat(1, W) - xx = xx.view(1, 1, H, W).repeat(B, 1, 1, 1) - yy = yy.view(1, 1, H, W).repeat(B, 1, 1, 1) - grid = torch.cat((xx, yy), 1).float() - - if x.is_cuda: - grid = grid.cuda() - vgrid = grid + flo - # makes a mapping out of the flow - - # scale grid to [-1,1] - vgrid[:, 0, :, :] = 2.0 * vgrid[:, 0, :, :].clone() / max(W - 1, 1) - 1.0 - vgrid[:, 1, :, :] = 2.0 * vgrid[:, 1, :, :].clone() / max(H - 1, 1) - 1.0 - - vgrid = vgrid.permute(0, 2, 3, 1) - - if float(torch.__version__[:3]) >= 1.3: - output = nn.functional.grid_sample(x, vgrid, align_corners=True) - else: - output = nn.functional.grid_sample(x, vgrid) - return output \ No newline at end of file diff --git a/sjlee_backup/loss.py b/sjlee_backup/loss.py deleted file mode 100644 index 6edbb37..0000000 --- a/sjlee_backup/loss.py +++ /dev/null @@ -1,19 +0,0 @@ -import torch - -def loss_superglue(scores, all_matches): - # check if indexed correctly - loss = [] - loss.append(torch.tensor(0.).cuda()) - for i in range(len(all_matches[0])): - 
x = all_matches[0][i][0] - y = all_matches[0][i][1] - if x>=len(scores[0]) or y>=len(scores[0][0]):continue - - loss.append(-torch.log( scores[0][x][y] )) # check batch size == 1 ? - # for p0 in unmatched0: - # loss += -torch.log(scores[0][p0][-1]) - # for p1 in unmatched1: - # loss += -torch.log(scores[0][-1][p1]) - loss_mean = torch.mean(torch.stack(loss)) - loss_mean = torch.reshape(loss_mean, (1, -1)) - return loss_mean[0] diff --git a/sjlee_backup/losssuperglue.py b/sjlee_backup/losssuperglue.py deleted file mode 100644 index cecca4e..0000000 --- a/sjlee_backup/losssuperglue.py +++ /dev/null @@ -1,19 +0,0 @@ -import torch - -def loss_superglue(scores, all_matches): - # check if indexed correctly - loss = [] - loss.append(torch.tensor(0.).cuda()) - for i in range(len(all_matches[0])): - x = all_matches[0][i][0] - y = all_matches[0][i][1] - - if x>=len(scores[0]) or y>=len(scores[0][0]):continue - loss.append(-torch.log( scores[0][x][y] )) # check batch size == 1 ? - # for p0 in unmatched0: - # loss += -torch.log(scores[0][p0][-1]) - # for p1 in unmatched1: - # loss += -torch.log(scores[0][-1][p1]) - loss_mean = torch.mean(torch.stack(loss)) - loss_mean = torch.reshape(loss_mean, (1, -1)) - return loss_mean[0] diff --git a/sjlee_backup/superglue.py b/sjlee_backup/superglue.py deleted file mode 100644 index 6837d47..0000000 --- a/sjlee_backup/superglue.py +++ /dev/null @@ -1,359 +0,0 @@ -# %BANNER_BEGIN% -# --------------------------------------------------------------------- -# %COPYRIGHT_BEGIN% -# -# Magic Leap, Inc. ("COMPANY") CONFIDENTIAL -# -# Unpublished Copyright (c) 2020 -# Magic Leap, Inc., All Rights Reserved. -# -# NOTICE: All information contained herein is, and remains the property -# of COMPANY. The intellectual and technical concepts contained herein -# are proprietary to COMPANY and may be covered by U.S. and Foreign -# Patents, patents in process, and are protected by trade secret or -# copyright law. 
Dissemination of this information or reproduction of -# this material is strictly forbidden unless prior written permission is -# obtained from COMPANY. Access to the source code contained herein is -# hereby forbidden to anyone except current COMPANY employees, managers -# or contractors who have executed Confidentiality and Non-disclosure -# agreements explicitly covering such access. -# -# The copyright notice above does not evidence any actual or intended -# publication or disclosure of this source code, which includes -# information that is confidential and/or proprietary, and is a trade -# secret, of COMPANY. ANY REPRODUCTION, MODIFICATION, DISTRIBUTION, -# PUBLIC PERFORMANCE, OR PUBLIC DISPLAY OF OR THROUGH USE OF THIS -# SOURCE CODE WITHOUT THE EXPRESS WRITTEN CONSENT OF COMPANY IS -# STRICTLY PROHIBITED, AND IN VIOLATION OF APPLICABLE LAWS AND -# INTERNATIONAL TREATIES. THE RECEIPT OR POSSESSION OF THIS SOURCE -# CODE AND/OR RELATED INFORMATION DOES NOT CONVEY OR IMPLY ANY RIGHTS -# TO REPRODUCE, DISCLOSE OR DISTRIBUTE ITS CONTENTS, OR TO MANUFACTURE, -# USE, OR SELL ANYTHING THAT IT MAY DESCRIBE, IN WHOLE OR IN PART. 
-# -# %COPYRIGHT_END% -# ---------------------------------------------------------------------- -# %AUTHORS_BEGIN% -# -# Originating Authors: Paul-Edouard Sarlin -# -# %AUTHORS_END% -# --------------------------------------------------------------------*/ -# %BANNER_END% - - from copy import deepcopy - from pathlib import Path - import torch - from torch import nn - - - def MLP(channels: list, do_bn=True): - """ Multi-layer perceptron """ - n = len(channels) - layers = [] - for i in range(1, n): - layers.append( - nn.Conv1d(channels[i - 1], channels[i], kernel_size=1, bias=True)) - if i < (n-1): - if do_bn: - # layers.append(nn.BatchNorm1d(channels[i])) - layers.append(nn.InstanceNorm1d(channels[i])) - layers.append(nn.ReLU()) - return nn.Sequential(*layers) - - - def normalize_keypoints(kpts, image_shape): - """ Normalize keypoint locations based on image_shape""" - _, _, height, width = image_shape - one = kpts.new_tensor(1) - size = torch.stack([one*width, one*height])[None] - center = size / 2 - scaling = size.max(1, keepdim=True).values * 0.7 - return (kpts - center[:, None, :]) / scaling[:, None, :] - - - class KeypointEncoder(nn.Module): - """ Joint encoding of visual appearance and location using MLPs""" - def __init__(self, feature_dim, layers): - super().__init__() - self.encoder = MLP([3] + layers + [feature_dim]) - nn.init.constant_(self.encoder[-1].bias, 0.0) - - def forward(self, kpts, scores): - inputs = [kpts.transpose(1, 2), scores.unsqueeze(1)] - return self.encoder(torch.cat(inputs, dim=1)) - - - def attention(query, key, value): - dim = query.shape[1] - scores = torch.einsum('bdhn,bdhm->bhnm', query, key) / dim**.5 - prob = torch.nn.functional.softmax(scores, dim=-1) - return torch.einsum('bhnm,bdhm->bdhn', prob, value), prob - - - class MultiHeadedAttention(nn.Module): - """ Multi-head attention to increase model expressivity """ - def __init__(self, num_heads: int, d_model: int): - super().__init__() - assert d_model % num_heads == 0 - self.dim 
= d_model // num_heads - self.num_heads = num_heads - self.merge = nn.Conv1d(d_model, d_model, kernel_size=1) - self.proj = nn.ModuleList([deepcopy(self.merge) for _ in range(3)]) - - def forward(self, query, key, value): - batch_dim = query.size(0) - query, key, value = [l(x).view(batch_dim, self.dim, self.num_heads, -1) - for l, x in zip(self.proj, (query, key, value))] - x, prob = attention(query, key, value) - self.prob.append(prob) - return self.merge(x.contiguous().view(batch_dim, self.dim*self.num_heads, -1)) - - -class AttentionalPropagation(nn.Module): - def __init__(self, feature_dim: int, num_heads: int): - super().__init__() - self.attn = MultiHeadedAttention(num_heads, feature_dim) - self.mlp = MLP([feature_dim*2, feature_dim*2, feature_dim]) - nn.init.constant_(self.mlp[-1].bias, 0.0) - - def forward(self, x, source): - message = self.attn(x, source, source) - return self.mlp(torch.cat([x, message], dim=1)) - - -class AttentionalGNN(nn.Module): - def __init__(self, feature_dim: int, layer_names: list): - super().__init__() - self.layers = nn.ModuleList([ - AttentionalPropagation(feature_dim, 4) - for _ in range(len(layer_names))]) - self.names = layer_names - - def forward(self, desc0, desc1): - for layer, name in zip(self.layers, self.names): - layer.attn.prob = [] - if name == 'cross': - src0, src1 = desc1, desc0 - else: # if name == 'self': - src0, src1 = desc0, desc1 - delta0, delta1 = layer(desc0, src0), layer(desc1, src1) - desc0, desc1 = (desc0 + delta0), (desc1 + delta1) - return desc0, desc1 - - -def log_sinkhorn_iterations(Z, log_mu, log_nu, iters: int): - """ Perform Sinkhorn Normalization in Log-space for stability""" - u, v = torch.zeros_like(log_mu), torch.zeros_like(log_nu) - for _ in range(iters): - u = log_mu - torch.logsumexp(Z + v.unsqueeze(1), dim=2) - v = log_nu - torch.logsumexp(Z + u.unsqueeze(2), dim=1) - return Z + u.unsqueeze(2) + v.unsqueeze(1) - - -def log_optimal_transport(scores, alpha, iters: int): - """ Perform 
Differentiable Optimal Transport in Log-space for stability""" - b, m, n = scores.shape - one = scores.new_tensor(1) - ms, ns = (m*one).to(scores), (n*one).to(scores) - - bins0 = alpha.expand(b, m, 1) - bins1 = alpha.expand(b, 1, n) - alpha = alpha.expand(b, 1, 1) - - couplings = torch.cat([torch.cat([scores, bins0], -1), - torch.cat([bins1, alpha], -1)], 1) - - norm = - (ms + ns).log() - log_mu = torch.cat([norm.expand(m), ns.log()[None] + norm]) - log_nu = torch.cat([norm.expand(n), ms.log()[None] + norm]) - log_mu, log_nu = log_mu[None].expand(b, -1), log_nu[None].expand(b, -1) - - Z = log_sinkhorn_iterations(couplings, log_mu, log_nu, iters) - Z = Z - norm # multiply probabilities by M+N - return Z - - -def arange_like(x, dim: int): - return x.new_ones(x.shape[dim]).cumsum(0) - 1 # traceable in 1.1 - - -class SuperGlue(nn.Module): - """SuperGlue feature matching middle-end - Given two sets of keypoints and locations, we determine the - correspondences by: - 1. Keypoint Encoding (normalization + visual feature and location fusion) - 2. Graph Neural Network with multiple self and cross-attention layers - 3. Final projection layer - 4. Optimal Transport Layer (a differentiable Hungarian matching algorithm) - 5. Thresholding matrix based on mutual exclusivity and a match_threshold - The correspondence ids use -1 to indicate non-matching points. - Paul-Edouard Sarlin, Daniel DeTone, Tomasz Malisiewicz, and Andrew - Rabinovich. SuperGlue: Learning Feature Matching with Graph Neural - Networks. In CVPR, 2020. 
https://arxiv.org/abs/1911.11763 - """ - default_config = { - 'descriptor_dim': 256, - 'weights': 'indoor', - 'keypoint_encoder': [32, 64, 128, 256], - 'GNN_layers': ['self', 'cross'] * 9, - 'sinkhorn_iterations': 100, - 'match_threshold': 0.2, - } - - def __init__(self, config): - super().__init__() - self.config = {**self.default_config, **config} - - self.kenc = KeypointEncoder( - self.config['descriptor_dim'], self.config['keypoint_encoder']) - - self.gnn = AttentionalGNN( - self.config['descriptor_dim'], self.config['GNN_layers']) - - self.final_proj = nn.Conv1d( - self.config['descriptor_dim'], self.config['descriptor_dim'], - kernel_size=1, bias=True) - - bin_score = torch.nn.Parameter(torch.tensor(1.)) - self.register_parameter('bin_score', bin_score) - - # assert self.config['weights'] in ['indoor', 'outdoor'] - # path = Path(__file__).parent - # path = path / 'weights/superglue_{}.pth'.format(self.config['weights']) - # self.load_state_dict(torch.load(path)) - # print('Loaded SuperGlue model (\"{}\" weights)'.format( - # self.config['weights'])) - - def forward(self, data): - """Run SuperGlue on a pair of keypoints and descriptors""" - desc0, desc1 = data['descriptors0'], data['descriptors1'] - kpts0, kpts1 = data['keypoints0'], data['keypoints1'] - - """ - desc0 = desc0.transpose(0,1) - desc1 = desc1.transpose(0,1) - kpts0 = torch.reshape(kpts0, (1, -1, 2)) - kpts1 = torch.reshape(kpts1, (1, -1, 2)) - """ - - if kpts0.shape[1] == 0 or kpts1.shape[1] == 0: # no keypoints - shape0, shape1 = kpts0.shape[:-1], kpts1.shape[:-1] - return { - 'matches0': kpts0.new_full(shape0, -1, dtype=torch.int)[0], - 'matches1': kpts1.new_full(shape1, -1, dtype=torch.int)[0], - 'matching_scores0': kpts0.new_zeros(shape0)[0], - 'matching_scores1': kpts1.new_zeros(shape1)[0], - 'skip_train': True - } - - """ - file_name = data['file_name'] - all_matches = data['all_matches'].permute(1,2,0) # shape=torch.Size([1, 87, 2]) - """ - - # Keypoint normalization. 
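The `normalize_keypoints` helper called at this point in `forward` maps pixel coordinates into a roughly [-1, 1] range centred on the image and scaled by 0.7 times the larger image dimension. A standalone sketch restating the helper from this file so it can be exercised on its own (the toy keypoints and the 512x512 image shape are invented for illustration):

```python
import torch


def normalize_keypoints(kpts, image_shape):
    # Centre keypoints on the image and scale by 0.7 * max(width, height),
    # as in the helper defined earlier in this file.
    _, _, height, width = image_shape
    one = kpts.new_tensor(1)
    size = torch.stack([one * width, one * height])[None]  # (1, 2) as (w, h)
    center = size / 2
    scaling = size.max(1, keepdim=True).values * 0.7
    return (kpts - center[:, None, :]) / scaling[:, None, :]


# Three keypoints in a 512x512 image: a corner, the exact centre, the far corner.
kpts = torch.tensor([[[0.0, 0.0], [256.0, 256.0], [511.0, 511.0]]])
normed = normalize_keypoints(kpts, (1, 1, 512, 512))
```

The image centre lands exactly at the origin, and every coordinate stays inside (-1, 1), which keeps the keypoint MLP encoder's input well scaled regardless of image resolution.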
- kpts0 = normalize_keypoints(kpts0, data['image0'].shape) - kpts1 = normalize_keypoints(kpts1, data['image1'].shape) - - # Keypoint MLP encoder. - """ - desc0 = desc0 + self.kenc(kpts0, torch.transpose(data['scores0'], 0, 1)) - desc1 = desc1 + self.kenc(kpts1, torch.transpose(data['scores1'], 0, 1)) - """ - desc0 = desc0 + self.kenc(kpts0, data['scores0']) - desc1 = desc1 + self.kenc(kpts1, data['scores1']) - - # Multi-layer Transformer network. - desc0, desc1 = self.gnn(desc0, desc1) - - # Final MLP projection. - mdesc0, mdesc1 = self.final_proj(desc0), self.final_proj(desc1) - - # Compute matching descriptor distance. - scores = torch.einsum('bdn,bdm->bnm', mdesc0, mdesc1) - scores = scores / self.config['descriptor_dim']**.5 - - # Run the optimal transport. - scores = log_optimal_transport( - scores, self.bin_score, - iters=self.config['sinkhorn_iterations']) - - # Get the matches with score above "match_threshold". - max0, max1 = scores[:, :-1, :-1].max(2), scores[:, :-1, :-1].max(1) - indices0, indices1 = max0.indices, max1.indices - mutual0 = arange_like(indices0, 1)[None] == indices1.gather(1, indices0) - mutual1 = arange_like(indices1, 1)[None] == indices0.gather(1, indices1) - zero = scores.new_tensor(0) - mscores0 = torch.where(mutual0, max0.values.exp(), zero) - mscores1 = torch.where(mutual1, mscores0.gather(1, indices1), zero) - valid0 = mutual0 & (mscores0 > self.config['match_threshold']) - valid1 = mutual1 & valid0.gather(1, indices1) - indices0 = torch.where(valid0, indices0, indices0.new_tensor(-1)) - indices1 = torch.where(valid1, indices1, indices1.new_tensor(-1)) - - """ - # check if indexed correctly - loss = [] - for i in range(len(all_matches[0])): - x = all_matches[0][i][0] - y = all_matches[0][i][1] - loss.append(-torch.log( scores[0][x][y].exp() )) # check batch size == 1 ? 
- # for p0 in unmatched0: - # loss += -torch.log(scores[0][p0][-1]) - # for p1 in unmatched1: - # loss += -torch.log(scores[0][-1][p1]) - loss_mean = torch.mean(torch.stack(loss)) - loss_mean = torch.reshape(loss_mean, (1, -1)) - """ - - return { - 'matches0': indices0[0], # use -1 for invalid match - 'matches1': indices1[0], # use -1 for invalid match - 'matching_scores0': mscores0[0], - 'matching_scores1': mscores1[0], - # 'loss': loss_mean[0], - 'skip_train': False - } - - # scores big value or small value means confidence? log can't take neg value - -if __name__ == '__main__': - from superpoint import SuperPoint - - config = { - 'superpoint': { - 'nms_radius': 4, - 'keypoint_threshold': 0.005, - 'max_keypoints': -1 - }, - 'superglue': { - 'weights': 'indoor', - 'sinkhorn_iterations': 20, - 'match_threshold':0.2, - } - } - - data = { - 'image0': torch.randn(1, 1, 512, 512), - 'image1': torch.randn(1, 1, 512, 512) - } - - superpoint = SuperPoint(config.get('superpoint', {})) - - output1 = superpoint({'image': data['image0']}) - output2 = superpoint({'image': data['image1']}) - - pred = {} - - pred = {**pred, **{k+'0': v for k, v in output1.items()}} - pred = {**pred, **{k+'1': v for k, v in output2.items()}} - - data = {**data, **pred} - - for k in data: - if isinstance(data[k], (list, tuple)): - data[k] = torch.stack(data[k]) - - print(data['descriptors0'].shape) - superglue = SuperGlue(config.get('superglue', {})) - superglue(data) \ No newline at end of file diff --git a/sjlee_backup/superglue2.py b/sjlee_backup/superglue2.py deleted file mode 100644 index 5bd4028..0000000 --- a/sjlee_backup/superglue2.py +++ /dev/null @@ -1,326 +0,0 @@ - -# %BANNER_BEGIN% -# --------------------------------------------------------------------- -# %COPYRIGHT_BEGIN% -# -# Magic Leap, Inc. ("COMPANY") CONFIDENTIAL -# -# Unpublished Copyright (c) 2020 -# Magic Leap, Inc., All Rights Reserved. 
-# -# %COPYRIGHT_END% -# ---------------------------------------------------------------------- -# %AUTHORS_BEGIN% -# -# Originating Authors: Paul-Edouard Sarlin -# -# %AUTHORS_END% -# --------------------------------------------------------------------*/ -# %BANNER_END% - - from copy import deepcopy - from pathlib import Path - from typing import List, Tuple - - import torch - from torch import nn - - - def MLP(channels: List[int], do_bn: bool = True) -> nn.Module: - """ Multi-layer perceptron """ - n = len(channels) - layers = [] - for i in range(1, n): - layers.append( - nn.Conv1d(channels[i - 1], channels[i], kernel_size=1, bias=True)) - if i < (n-1): - if do_bn: - layers.append(nn.BatchNorm1d(channels[i])) - layers.append(nn.ReLU()) - return nn.Sequential(*layers) - - - def normalize_keypoints(kpts, image_shape): - """ Normalize keypoint locations based on image_shape""" - _, _, height, width = image_shape - one = kpts.new_tensor(1) - size = torch.stack([one*width, one*height])[None] - center = size / 2 - scaling = size.max(1, keepdim=True).values * 0.7 - return (kpts - center[:, None, :]) / scaling[:, None, :] - - - class KeypointEncoder(nn.Module): - """ Joint encoding of visual appearance and location using MLPs""" - def __init__(self, feature_dim: int, layers: List[int]) -> None: - super().__init__() - self.encoder = MLP([3] + layers + [feature_dim]) - nn.init.constant_(self.encoder[-1].bias, 0.0) - - def forward(self, kpts, scores): - inputs = [kpts.transpose(1, 2), scores.unsqueeze(1)] - return self.encoder(torch.cat(inputs, dim=1)) - - - def attention(query: torch.Tensor, key: torch.Tensor, value: torch.Tensor) -> Tuple[torch.Tensor,torch.Tensor]: - dim = query.shape[1] - scores = torch.einsum('bdhn,bdhm->bhnm', query, key) / dim**.5 - prob = torch.nn.functional.softmax(scores, dim=-1) - return torch.einsum('bhnm,bdhm->bdhn', prob, value), prob - - - class MultiHeadedAttention(nn.Module): - """ Multi-head attention to increase model expressivity """ - def 
__init__(self, num_heads: int, d_model: int): - super().__init__() - assert d_model % num_heads == 0 - self.dim = d_model // num_heads - self.num_heads = num_heads - self.merge = nn.Conv1d(d_model, d_model, kernel_size=1) - self.proj = nn.ModuleList([deepcopy(self.merge) for _ in range(3)]) - - def forward(self, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor) -> torch.Tensor: - batch_dim = query.size(0) - query, key, value = [l(x).view(batch_dim, self.dim, self.num_heads, -1) - for l, x in zip(self.proj, (query, key, value))] - x, _ = attention(query, key, value) - return self.merge(x.contiguous().view(batch_dim, self.dim*self.num_heads, -1)) - - -class AttentionalPropagation(nn.Module): - def __init__(self, feature_dim: int, num_heads: int): - super().__init__() - self.attn = MultiHeadedAttention(num_heads, feature_dim) - self.mlp = MLP([feature_dim*2, feature_dim*2, feature_dim]) - nn.init.constant_(self.mlp[-1].bias, 0.0) - - def forward(self, x: torch.Tensor, source: torch.Tensor) -> torch.Tensor: - message = self.attn(x, source, source) - return self.mlp(torch.cat([x, message], dim=1)) - - -class AttentionalGNN(nn.Module): - def __init__(self, feature_dim: int, layer_names: List[str]) -> None: - super().__init__() - self.layers = nn.ModuleList([ - AttentionalPropagation(feature_dim, 4) - for _ in range(len(layer_names))]) - self.names = layer_names - - def forward(self, desc0: torch.Tensor, desc1: torch.Tensor) -> Tuple[torch.Tensor,torch.Tensor]: - for layer, name in zip(self.layers, self.names): - if name == 'cross': - src0, src1 = desc1, desc0 - else: # if name == 'self': - src0, src1 = desc0, desc1 - delta0, delta1 = layer(desc0, src0), layer(desc1, src1) - desc0, desc1 = (desc0 + delta0), (desc1 + delta1) - return desc0, desc1 - - -def log_sinkhorn_iterations(Z: torch.Tensor, log_mu: torch.Tensor, log_nu: torch.Tensor, iters: int) -> torch.Tensor: - """ Perform Sinkhorn Normalization in Log-space for stability""" - u, v = 
torch.zeros_like(log_mu), torch.zeros_like(log_nu) - for _ in range(iters): - u = log_mu - torch.logsumexp(Z + v.unsqueeze(1), dim=2) - v = log_nu - torch.logsumexp(Z + u.unsqueeze(2), dim=1) - - return Z + u.unsqueeze(2) + v.unsqueeze(1) - - -def log_optimal_transport(scores: torch.Tensor, alpha: torch.Tensor, iters: int) -> torch.Tensor: - """ Perform Differentiable Optimal Transport in Log-space for stability""" - b, m, n = scores.shape - one = scores.new_tensor(1) - ms, ns = (m*one).to(scores), (n*one).to(scores) - - bins0 = alpha.expand(b, m, 1) - bins1 = alpha.expand(b, 1, n) - alpha = alpha.expand(b, 1, 1) - - couplings = torch.cat([torch.cat([scores, bins0], -1), - torch.cat([bins1, alpha], -1)], 1) - - norm = - (ms + ns).log() - log_mu = torch.cat([norm.expand(m), ns.log()[None] + norm]) - log_nu = torch.cat([norm.expand(n), ms.log()[None] + norm]) - log_mu, log_nu = log_mu[None].expand(b, -1), log_nu[None].expand(b, -1) - - Z = log_sinkhorn_iterations(couplings, log_mu, log_nu, iters) - Z = Z - norm # multiply probabilities by M+N - return Z - - -def arange_like(x, dim: int): - return x.new_ones(x.shape[dim]).cumsum(0) - 1 # traceable in 1.1 - - -class SuperGlue(nn.Module): - """SuperGlue feature matching middle-end - Given two sets of keypoints and locations, we determine the - correspondences by: - 1. Keypoint Encoding (normalization + visual feature and location fusion) - 2. Graph Neural Network with multiple self and cross-attention layers - 3. Final projection layer - 4. Optimal Transport Layer (a differentiable Hungarian matching algorithm) - 5. Thresholding matrix based on mutual exclusivity and a match_threshold - The correspondence ids use -1 to indicate non-matching points. - Paul-Edouard Sarlin, Daniel DeTone, Tomasz Malisiewicz, and Andrew - Rabinovich. SuperGlue: Learning Feature Matching with Graph Neural - Networks. In CVPR, 2020. 
https://arxiv.org/abs/1911.11763 - """ - default_config = { - 'descriptor_dim': 256, - 'weights': 'indoor', - 'keypoint_encoder': [32, 64, 128, 256], - 'GNN_layers': ['self', 'cross'] * 9, - 'sinkhorn_iterations': 100, - 'match_threshold': 0.2, - } - - def __init__(self, config): - super().__init__() - self.config = {**self.default_config, **config} - - self.kenc = KeypointEncoder( - self.config['descriptor_dim'], self.config['keypoint_encoder']) - - self.gnn = AttentionalGNN( - feature_dim=self.config['descriptor_dim'], layer_names=self.config['GNN_layers']) - - self.final_proj = nn.Conv1d( - self.config['descriptor_dim'], self.config['descriptor_dim'], - kernel_size=1, bias=True) - - bin_score = torch.nn.Parameter(torch.tensor(1.)) - self.register_parameter('bin_score', bin_score) - - assert self.config['weights'] in ['indoor', 'outdoor'] - path = Path(__file__).parent - path = path / 'weights/superglue_{}.pth'.format(self.config['weights']) - self.load_state_dict(torch.load(str(path))) - print('Loaded SuperGlue model (\"{}\" weights)'.format( - self.config['weights'])) - - def forward(self, data): - """Run SuperGlue on a pair of keypoints and descriptors""" - desc0, desc1 = data['descriptors0'], data['descriptors1'] - kpts0, kpts1 = data['keypoints0'], data['keypoints1'] - - if kpts0.shape[1] == 0 or kpts1.shape[1] == 0: # no keypoints - shape0, shape1 = kpts0.shape[:-1], kpts1.shape[:-1] - return { - 'matches0': kpts0.new_full(shape0, -1, dtype=torch.int), - 'matches1': kpts1.new_full(shape1, -1, dtype=torch.int), - 'matching_scores0': kpts0.new_zeros(shape0), - 'matching_scores1': kpts1.new_zeros(shape1), - } - - # Keypoint normalization. - kpts0 = normalize_keypoints(kpts0, data['image0'].shape) - kpts1 = normalize_keypoints(kpts1, data['image1'].shape) - - # Keypoint MLP encoder. - desc0 = desc0 + self.kenc(kpts0, data['scores0']) - desc1 = desc1 + self.kenc(kpts1, data['scores1']) - - # Multi-layer Transformer network. 
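The `log_optimal_transport` routine above balances the score-plus-dustbin coupling matrix via `log_sinkhorn_iterations`. A standalone sketch restating that helper so its convergence can be checked in isolation (the random score matrix and uniform marginals are invented for illustration); because the column update runs last, the column log-sums match `log_nu` exactly, while the row log-sums converge toward `log_mu`:

```python
import math

import torch


def log_sinkhorn_iterations(Z, log_mu, log_nu, iters):
    # Alternately rescale rows and columns in log-space so the coupling's
    # marginals approach log_mu / log_nu, as in the helper in this file.
    u, v = torch.zeros_like(log_mu), torch.zeros_like(log_nu)
    for _ in range(iters):
        u = log_mu - torch.logsumexp(Z + v.unsqueeze(1), dim=2)
        v = log_nu - torch.logsumexp(Z + u.unsqueeze(2), dim=1)
    return Z + u.unsqueeze(2) + v.unsqueeze(1)


torch.manual_seed(0)
b, m, n = 1, 4, 4
Z = torch.randn(b, m, n)                      # arbitrary log-scores
log_mu = torch.full((b, m), -math.log(m))     # uniform row marginals (log-space)
log_nu = torch.full((b, n), -math.log(n))     # uniform column marginals (log-space)
P = log_sinkhorn_iterations(Z, log_mu, log_nu, iters=100)
```

Working in log-space with `logsumexp` avoids the underflow that plain Sinkhorn scaling suffers when scores span many orders of magnitude.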
- desc0, desc1 = self.gnn(desc0, desc1) - - # Final MLP projection. - mdesc0, mdesc1 = self.final_proj(desc0), self.final_proj(desc1) - - # Compute matching descriptor distance. - scores = torch.einsum('bdn,bdm->bnm', mdesc0, mdesc1) - scores = scores / self.config['descriptor_dim']**.5 - - print(scores.shape) - - # Run the optimal transport. - scores = log_optimal_transport( - scores, self.bin_score, - iters=self.config['sinkhorn_iterations']) - - # Get the matches with score above "match_threshold". - max0, max1 = scores[:, :-1, :-1].max(2), scores[:, :-1, :-1].max(1) - indices0, indices1 = max0.indices, max1.indices - mutual0 = arange_like(indices0, 1)[None] == indices1.gather(1, indices0) - mutual1 = arange_like(indices1, 1)[None] == indices0.gather(1, indices1) - zero = scores.new_tensor(0) - mscores0 = torch.where(mutual0, max0.values.exp(), zero) - mscores1 = torch.where(mutual1, mscores0.gather(1, indices1), zero) - valid0 = mutual0 & (mscores0 > self.config['match_threshold']) - valid1 = mutual1 & valid0.gather(1, indices1) - indices0 = torch.where(valid0, indices0, indices0.new_tensor(-1)) - indices1 = torch.where(valid1, indices1, indices1.new_tensor(-1)) - - print(scores.shape) - return { - 'matches0': indices0, # use -1 for invalid match - 'matches1': indices1, # use -1 for invalid match - 'matching_scores0': mscores0, - 'matching_scores1': mscores1, - } - -if __name__ == '__main__': - from superpoint import SuperPoint - - config = { - 'superpoint': { - 'nms_radius': 4, - 'keypoint_threshold': 0.005, - 'max_keypoints': -1 - }, - 'superglue': { - 'weights': 'indoor', - 'sinkhorn_iterations': 20, - 'match_threshold':0.2, - } - } - - data = { - 'image0': torch.randn(1, 1, 512, 512), - 'image1': torch.randn(1, 1, 512, 512) - } - - superpoint = SuperPoint(config.get('superpoint', {})) - - output1 = superpoint({'image': data['image0']}) - output2 = superpoint({'image': data['image1']}) - - pred = {} - - pred = {**pred, **{k+'0': v for k, v in 
output1.items()}} - pred = {**pred, **{k+'1': v for k, v in output2.items()}} - - data = {**data, **pred} - - for k in data: - if isinstance(data[k], (list, tuple)): - data[k] = torch.stack(data[k]) - - superglue = SuperGlue(config.get('superglue', {})) - output = superglue(data) diff --git a/sjlee_backup/superpoint.py b/sjlee_backup/superpoint.py deleted file mode 100644 index 14a07fd..0000000 --- a/sjlee_backup/superpoint.py +++ /dev/null @@ -1,222 +0,0 @@ -# %BANNER_BEGIN% -# --------------------------------------------------------------------- -# %COPYRIGHT_BEGIN% -# -# Magic Leap, Inc. ("COMPANY") CONFIDENTIAL -# -# Unpublished Copyright (c) 2020 -# Magic Leap, Inc., All Rights Reserved. -# -# NOTICE: All information contained herein is, and remains the property -# of COMPANY. The intellectual and technical concepts contained herein -# are proprietary to COMPANY and may be covered by U.S. and Foreign -# Patents, patents in process, and are protected by trade secret or -# copyright law. Dissemination of this information or reproduction of -# this material is strictly forbidden unless prior written permission is -# obtained from COMPANY. Access to the source code contained herein is -# hereby forbidden to anyone except current COMPANY employees, managers -# or contractors who have executed Confidentiality and Non-disclosure -# agreements explicitly covering such access. -# -# The copyright notice above does not evidence any actual or intended -# publication or disclosure of this source code, which includes -# information that is confidential and/or proprietary, and is a trade -# secret, of COMPANY. ANY REPRODUCTION, MODIFICATION, DISTRIBUTION, -# PUBLIC PERFORMANCE, OR PUBLIC DISPLAY OF OR THROUGH USE OF THIS -# SOURCE CODE WITHOUT THE EXPRESS WRITTEN CONSENT OF COMPANY IS -# STRICTLY PROHIBITED, AND IN VIOLATION OF APPLICABLE LAWS AND -# INTERNATIONAL TREATIES. 
THE RECEIPT OR POSSESSION OF THIS SOURCE -# CODE AND/OR RELATED INFORMATION DOES NOT CONVEY OR IMPLY ANY RIGHTS -# TO REPRODUCE, DISCLOSE OR DISTRIBUTE ITS CONTENTS, OR TO MANUFACTURE, -# USE, OR SELL ANYTHING THAT IT MAY DESCRIBE, IN WHOLE OR IN PART. -# -# %COPYRIGHT_END% -# ---------------------------------------------------------------------- -# %AUTHORS_BEGIN% -# -# Originating Authors: Paul-Edouard Sarlin -# -# %AUTHORS_END% -# --------------------------------------------------------------------*/ -# %BANNER_END% - -from pathlib import Path -import torch -from torch import nn - -def simple_nms(scores, nms_radius: int): - """ Fast Non-maximum suppression to remove nearby points """ - assert(nms_radius >= 0) - - def max_pool(x): - return torch.nn.functional.max_pool2d( - x, kernel_size=nms_radius*2+1, stride=1, padding=nms_radius) - - zeros = torch.zeros_like(scores) - max_mask = scores == max_pool(scores) - for _ in range(2): - supp_mask = max_pool(max_mask.float()) > 0 - supp_scores = torch.where(supp_mask, zeros, scores) - new_max_mask = supp_scores == max_pool(supp_scores) - max_mask = max_mask | (new_max_mask & (~supp_mask)) - return torch.where(max_mask, scores, zeros) - - -def remove_borders(keypoints, scores, border: int, height: int, width: int): - """ Removes keypoints too close to the border """ - mask_h = (keypoints[:, 0] >= border) & (keypoints[:, 0] < (height - border)) - mask_w = (keypoints[:, 1] >= border) & (keypoints[:, 1] < (width - border)) - mask = mask_h & mask_w - return keypoints[mask], scores[mask] - - -def top_k_keypoints(keypoints, scores, k: int): - if k >= len(keypoints): - return keypoints, scores - scores, indices = torch.topk(scores, k, dim=0) - return keypoints[indices], scores - - -def sample_descriptors(keypoints, descriptors, s: int = 8): - """ Interpolate descriptors at keypoint locations """ - b, c, h, w = descriptors.shape - keypoints = keypoints - s / 2 + 0.5 - keypoints /= torch.tensor([(w*s - s/2 - 0.5), (h*s - s/2 - 
0.5)], - ).to(keypoints)[None] - keypoints = keypoints*2 - 1 # normalize to (-1, 1) - args = {'align_corners': True} if int(torch.__version__[2]) > 2 else {} - descriptors = torch.nn.functional.grid_sample( - descriptors, keypoints.view(b, 1, -1, 2), mode='bilinear', **args) - descriptors = torch.nn.functional.normalize( - descriptors.reshape(b, c, -1), p=2, dim=1) - return descriptors - - -class SuperPoint(nn.Module): - """SuperPoint Convolutional Detector and Descriptor - SuperPoint: Self-Supervised Interest Point Detection and - Description. Daniel DeTone, Tomasz Malisiewicz, and Andrew - Rabinovich. In CVPRW, 2019. https://arxiv.org/abs/1712.07629 - """ - default_config = { - 'descriptor_dim': 256, - 'nms_radius': 4, - 'keypoint_threshold': 0.005, - 'max_keypoints': -1, - 'remove_borders': 4, - } - - def __init__(self, config): - super().__init__() - self.config = {**self.default_config, **config} - - self.relu = nn.ReLU(inplace=True) - self.pool = nn.MaxPool2d(kernel_size=2, stride=2) - c1, c2, c3, c4, c5 = 64, 64, 128, 128, 256 - - self.conv1a = nn.Conv2d(1, c1, kernel_size=3, stride=1, padding=1) - self.conv1b = nn.Conv2d(c1, c1, kernel_size=3, stride=1, padding=1) - self.conv2a = nn.Conv2d(c1, c2, kernel_size=3, stride=1, padding=1) - self.conv2b = nn.Conv2d(c2, c2, kernel_size=3, stride=1, padding=1) - self.conv3a = nn.Conv2d(c2, c3, kernel_size=3, stride=1, padding=1) - self.conv3b = nn.Conv2d(c3, c3, kernel_size=3, stride=1, padding=1) - self.conv4a = nn.Conv2d(c3, c4, kernel_size=3, stride=1, padding=1) - self.conv4b = nn.Conv2d(c4, c4, kernel_size=3, stride=1, padding=1) - - self.convPa = nn.Conv2d(c4, c5, kernel_size=3, stride=1, padding=1) - self.convPb = nn.Conv2d(c5, 65, kernel_size=1, stride=1, padding=0) - - self.convDa = nn.Conv2d(c4, c5, kernel_size=3, stride=1, padding=1) - self.convDb = nn.Conv2d( - c5, self.config['descriptor_dim'], - kernel_size=1, stride=1, padding=0) - - path = Path(__file__).parent / 'weights/superpoint_v1.pth' - 
self.load_state_dict(torch.load(str(path))) - - mk = self.config['max_keypoints'] - if mk == 0 or mk < -1: - raise ValueError('\"max_keypoints\" must be positive or \"-1\"') - - print('Loaded SuperPoint model') - - def forward(self, data): - """ Compute keypoints, scores, descriptors for image """ - # Shared Encoder - x = self.relu(self.conv1a(data['image'])) - x = self.relu(self.conv1b(x)) - x = self.pool(x) - x = self.relu(self.conv2a(x)) - x = self.relu(self.conv2b(x)) - x = self.pool(x) - x = self.relu(self.conv3a(x)) - x = self.relu(self.conv3b(x)) - x = self.pool(x) - x = self.relu(self.conv4a(x)) - x = self.relu(self.conv4b(x)) - - # Compute the dense keypoint scores - cPa = self.relu(self.convPa(x)) - scores = self.convPb(cPa) - scores = torch.nn.functional.softmax(scores, 1)[:, :-1] - b, _, h, w = scores.shape - scores = scores.permute(0, 2, 3, 1).reshape(b, h, w, 8, 8) - scores = scores.permute(0, 1, 3, 2, 4).reshape(b, h*8, w*8) - scores = simple_nms(scores, self.config['nms_radius']) - - # Extract keypoints - keypoints = [ - torch.nonzero(s > self.config['keypoint_threshold']) - for s in scores] - scores = [s[tuple(k.t())] for s, k in zip(scores, keypoints)] - - # Discard keypoints near the image borders - keypoints, scores = list(zip(*[ - remove_borders(k, s, self.config['remove_borders'], h*8, w*8) - for k, s in zip(keypoints, scores)])) - - # Keep the k keypoints with highest score - if self.config['max_keypoints'] >= 0: - keypoints, scores = list(zip(*[ - top_k_keypoints(k, s, self.config['max_keypoints']) - for k, s in zip(keypoints, scores)])) - - # Convert (h, w) to (x, y) - keypoints = [torch.flip(k, [1]).float() for k in keypoints] - - # Compute the dense descriptors - cDa = self.relu(self.convDa(x)) - descriptors = self.convDb(cDa) - descriptors = torch.nn.functional.normalize(descriptors, p=2, dim=1) - - # Extract descriptors - descriptors = [sample_descriptors(k[None], d[None], 8)[0] - for k, d in zip(keypoints, descriptors)] - - return { - 
'keypoints': keypoints,
-            'scores': scores,
-            'descriptors': descriptors,
-        }
-
-if __name__ == '__main__':
-    config = {
-        'superpoint': {
-            'nms_radius': 4,
-            'keypoint_threshold': 0.005,
-            'max_keypoints': -1
-        },
-        'superglue': {
-            'weights': 'indoor',
-            'sinkhorn_iterations': 20,
-            'match_threshold': 0.2,
-        }
-    }
-
-    test_img = torch.randn(1, 1, 512, 512)
-    data = {'image': test_img}
-
-    superpoint = SuperPoint(config.get('superpoint', {}))
-    output = superpoint(data)
-
-    print(output['keypoints'][0].shape, output['descriptors'][0].shape)
\ No newline at end of file
diff --git a/sjlee_backup/train_pseudo.py b/sjlee_backup/train_pseudo.py
deleted file mode 100644
index 3c09bf4..0000000
--- a/sjlee_backup/train_pseudo.py
+++ /dev/null
@@ -1,41 +0,0 @@
-
-"""
-1. Set the config as shown below.
-2. Set `weights` to indoor or outdoor, whichever matches the use case.
-config = {
-    'superpoint': {
-        'nms_radius': 4,
-        'keypoint_threshold': 0.005,
-        'max_keypoints': 1024
-    },
-    'superglue': {
-        'weights': 'outdoor',
-        'sinkhorn_iterations': 20,
-        'match_threshold': 0.2
-    }
-}
-"""
-
-"""
-# start training
-for epoch in range(1, opt.epoch+1):
-    epoch_loss = 0
-    superglue.double().train()
-    for i, pred in enumerate(train_loader):
-        for k in pred:
-            if k != 'file_name' and k != 'image0' and k != 'image1':
-                if type(pred[k]) == torch.Tensor:
-                    pred[k] = Variable(pred[k].cuda())
-                else:
-                    pred[k] = Variable(torch.stack(pred[k]).cuda())
-
-        # =========== new code =============== #
-        scores, data = superglue(pred)
-        loss = loss_superglue(scores, data['all_matches'].permute(1, 2, 0))
-
-        for k, v in pred.items():
-            pred[k] = v[0]
-        pred = {**pred, **data, **{'loss': loss}}  # dict literal, not the set {'loss', loss}
-
-        # ... keep going
-"""
\ No newline at end of file

From b4a5a11caaa256ff0cc7d7ffa7f36fc51cbfbd26 Mon Sep 17 00:00:00 2001
From: KU-CVLAB <96568164+KU-CVLAB@users.noreply.github.com>
Date: Sat, 31 Dec 2022 15:11:25 +0900
Subject: [PATCH 8/9] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index daa2ac9..427b063 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@ Structure of Transformer Aggregator is illustrated below:
 ![aggregator](fig/aggregator.png)
 
 # Training
-To train the SuperGlue with default parameters, run the following command:
+To train SuperCATs with default parameters, run the following command:
 ```
 python train.py
 ```

From 41af8b0f20373e34d1d2b95bc808ae2f0d99fd43 Mon Sep 17 00:00:00 2001
From: KU-CVLAB <96568164+KU-CVLAB@users.noreply.github.com>
Date: Sat, 31 Dec 2022 19:41:15 +0900
Subject: [PATCH 9/9] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 427b063..b8c66f2 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-# SuperCATs
+# **SuperCATs**: Cost Aggregation with Transformers for Sparse Correspondence
 For more information, check out the paper at the [[paper link]](https://ieeexplore.ieee.org/document/9954872). Also check out the project page at the [[Project Page link]](https://ku-cvlab.github.io/SuperCATs/).<br>
 *This paper was accepted at ICCE-Asia'22*
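A closing note on the training pseudocode removed in `sjlee_backup/train_pseudo.py` above: the final step merges the batch dict, the network outputs, and the loss into one dict, and this is easy to get wrong, because `{'loss', loss}` is a set literal rather than a dict and cannot be `**`-unpacked. The sketch below uses plain-Python placeholder values (hypothetical stand-ins, not real tensors or SuperGlue outputs) to show the intended dict merge and the error the set form raises:

```python
# Hypothetical placeholders standing in for the real batch dict,
# the network's forward outputs, and the computed loss.
pred = {'image0': 'img0_tensor', 'image1': 'img1_tensor'}
data = {'matches0': [0, 2, -1, 1]}
loss = 0.5

# Intended merge: a dict literal with a key-value pair.
pred = {**pred, **data, **{'loss': loss}}
print(sorted(pred))  # ['image0', 'image1', 'loss', 'matches0']

# The set form fails, since ** requires a mapping.
try:
    {**pred, **{'loss', loss}}  # {'loss', loss} is a set, not a dict
except TypeError:
    print('a set cannot be **-unpacked')
```

In the real loop, `pred`, `data`, and `loss` would be tensors and model outputs; only the merge idiom carries over.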