This repository was archived by the owner on Apr 26, 2019. It is now read-only.

[FIX #185] Readd bounty label #235

Open
vitvly wants to merge 14 commits into develop from feature/readd-bounty-#185

Conversation


@vitvly vitvly commented Jan 25, 2018

FIX #185

Description

This fix does the following:

  1. Listen for unlabeled events from the GitHub API.
  2. If such an event is received, find the associated issue in the DB and remove it only if its funds are zero (currently this checks the value_usd field). This removes the bounty from SOB.
  3. If it is successfully deleted from the DB, post a request to the GitHub API to delete the comment. This removes the bounty comment on GitHub.
  4. No actions are taken on the contract itself.
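The steps above can be sketched roughly as follows; the function and key names used here (find-issue, delete-issue!, delete-comment!, :value_usd keys aside) are illustrative guesses, not necessarily the actual identifiers in this PR:

```clojure
;; Sketch of the unlabeled-event flow described above.
;; db/find-issue, db/delete-issue! and github/delete-comment! are
;; hypothetical helpers standing in for the real functions.
(defn handle-issue-unlabeled [webhook-payload]
  (let [issue-id (get-in webhook-payload [:issue :id])
        issue    (db/find-issue issue-id)]
    ;; step 2: remove only if no funds are attached to the bounty
    (when (and issue (zero? (:value_usd issue)))
      (db/delete-issue! issue-id)                    ; removes bounty from SOB
      ;; step 3: remove the bounty comment on GitHub
      (github/delete-comment! (:owner issue) (:repo issue)
                              (:comment_id issue)))))
```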

Questions

  1. Is value_usd the correct field to check against when removing a bounty?
  2. Should anything be done with regards to the bounty contract when the parent comment is deleted? If so, what messages should be sent?

Status: Finished.

@vitvly vitvly force-pushed the develop branch 2 times, most recently from 2b88442 to 2f24317 Compare January 26, 2018 16:38
@vitvly

vitvly commented Jan 30, 2018

Changed the comment deletion logic in the DB so that it moves a deleted record to the issues_archived table instead of dropping it altogether.

@vitvly vitvly requested review from oskarth and pablodip January 30, 2018 16:22
@vitvly vitvly self-assigned this Jan 30, 2018
@pablodip
Contributor

Not sure about the current strategy, since having a similar table creates a dependency on the original one, so when we want to modify or split the original table we would also have to do it with the archived table, and in the archived data.

The current goal is just to not lose information, so I agree with the archive concept. How about implementing it with a more general approach, for example an archive table with four columns (id, type, archived_at and data), where data could just be the EDN of the record that we want to archive? That way it would be more open to archiving anything, and we would not have a dependency on the shape of the info that we want to archive, so for example the archive would work with any version of the original data. The archived data would not be queryable from SQL; we would need to take it from there and query it in Clojure, but it is an archive in the end.
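The general archive table proposed here could look roughly like this (column types and defaults are assumptions based on this comment, not the actual schema):

```sql
-- Hypothetical sketch of the proposed general-purpose archive table.
CREATE TABLE archive (
  id          SERIAL PRIMARY KEY,
  type        TEXT NOT NULL,                  -- kind of record, e.g. 'issue'
  archived_at TIMESTAMP NOT NULL DEFAULT now(),
  data        TEXT NOT NULL                   -- EDN dump of the archived record
);
```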

Anyway if we seriously aim to not lose information and query better all data over time, that is, to have a time model for the data, we should probably consider using Datomic in the future.


@oskarth oskarth left a comment


Some general comments, no strong opinions on implementation

@@ -0,0 +1 @@
DROP TABLE issues_archived;
Contributor

Seems safer to rename it and leave it be for a while, then do a cleanup task once it is 100% certain it works well in both test and production.

Contributor

This is just a migration down, so why is the alter table really needed? We are not migrating down in production.

Author

I'm undecided on this. I suggest keeping it as is; it might help during testing on develop.

Contributor

It's just because this breaks migration semantics I believe. To me migrations should be able to run up and down several times, and this would break that.
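The migration semantics in question: a down migration should exactly undo its up counterpart, so that up, down, up can run repeatedly without errors. A minimal sketch of such a pair, with hypothetical file names and columns:

```sql
-- 20180130-create-issues-archived.up.sql
CREATE TABLE issues_archived (
  issue_id INTEGER PRIMARY KEY,
  data     JSONB NOT NULL
);

-- 20180130-create-issues-archived.down.sql
DROP TABLE issues_archived;
```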

Author

Got you. Sounds reasonable. Changed to DROP.

(when (= "reopened" action)
(handle-issue-reopened webhook-payload)))
(ok))
(case action
Contributor

default case?

Author

Added.

Vitaliy Vlasov added 3 commits February 6, 2018 15:59
use archived issues as a contract pool when deploying new contracts
@vitvly

vitvly commented Feb 6, 2018

Changed the code so that deleted issues go to archive table. When deploying new contracts, this table is checked to fetch free contracts that haven't been assigned to any issue. So it fulfills a double purpose: that of an archive and a contract pool.

('user');

CREATE TABLE archive (
type TEXT REFERENCES data_types(name),
Contributor

How about using an enum type so that we are not coupled to database values, but just to structure?
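In PostgreSQL the enum approach could look like this (type and column names are assumptions, not the actual migration):

```sql
-- Hypothetical sketch: an enum type instead of the data_types lookup table.
CREATE TYPE archive_data_type AS ENUM ('issue');

CREATE TABLE archive (
  id   SERIAL PRIMARY KEY,
  type archive_data_type NOT NULL,  -- coupled to structure, not to table rows
  data JSONB NOT NULL
);
```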

Author

Great idea, changed to enums.

-- :name remove-issue! :<! :1
-- :doc removes issue
WITH archived AS (
INSERT INTO archive(type,data)
Contributor

How about describing this in Clojure? I think it is better to keep persistence as simple as possible and to have all of the logic in the application code.

Author

I think it's simple enough to stay in a query, plus one fewer roundtrip to the DB is a good thing.
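For reference, the single-roundtrip approach being defended here is a writable CTE: archive the row and delete it in one statement. A sketch under assumed column names and HugSQL parameter style:

```sql
-- Sketch: move an issue into the archive and delete it in one roundtrip.
-- Column names and the :issue-id parameter are assumptions.
WITH archived AS (
  INSERT INTO archive (type, data)
  SELECT 'issue', row_to_json(i)
  FROM issues i
  WHERE i.issue_id = :issue-id
  RETURNING *
)
DELETE FROM issues
WHERE issue_id = :issue-id
RETURNING *;
```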


-- :name contract-from-pool :? :1
-- :doc return first contract from archive for given owner
SELECT a.data->>'transaction_hash' as transaction_hash
Contributor

Not totally convinced about using the archive for application logic beyond archiving. How about modelling the Ethereum contract in a table, even if only with the params that we need right now (I think transaction_hash and owner_address)? Then we don't need to archive it, but just need an attribute for status.

Contributor

@siphiuel Let me understand the query well before elaborating the proposal. Why are we selecting a.data->>'transaction_hash' and then filtering for it to be null with a.data->>'transaction_hash' = i.transaction_hash and i.transaction_hash is null? I might be misunderstanding the logic. :S

Author

We use the left join plus the issues.transaction_hash is null filter so as to pick transaction hashes that are not already in use. It might happen that a record has been moved from the issues table to the archive, and then restored. However, the archive table is append-only, so this results in the same transaction hash being present both in issues and in the archive, and we want to filter that out.
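The filtering described above amounts to a left join with an IS NULL check, roughly as follows (a sketch reconstructed from this thread; the real query presumably also filters by owner):

```sql
-- Sketch: pick an archived transaction hash not currently used by any issue.
SELECT a.data->>'transaction_hash' AS transaction_hash
FROM archive a
LEFT JOIN issues i
  ON a.data->>'transaction_hash' = i.transaction_hash
WHERE i.transaction_hash IS NULL   -- no live issue uses this contract
LIMIT 1;
```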

Contributor

Okay, got it, thanks!

So my proposal was about starting to model the Ethereum contract, since it is a model with some data in the end, and doing so only with the attributes and behaviours of the contract that we need for this.

For example, we could have a table ethereum_contract, with attrs id, owner_address, transaction_hash, status. When an issue is archived and has a contract that we can reuse, we can create a contract model there with the status not-used (or maybe a better name). Then when looking for available contracts, we search there, and when taking one, we change the status of it to used. That way we would avoid querying the archive table for application logic, and would express more directly the status of a contract and its filtering.
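A sketch of the table proposed above (table and column names come from the comment; types and status values are assumptions):

```sql
-- Hypothetical ethereum_contract table as proposed in this thread.
CREATE TABLE ethereum_contract (
  id               SERIAL PRIMARY KEY,
  owner_address    TEXT NOT NULL,
  transaction_hash TEXT NOT NULL UNIQUE,
  status           TEXT NOT NULL DEFAULT 'not-used'  -- 'not-used' | 'used'
);
```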

Author

Let's handle contract modeling in a separate issue: #267.

(log/debug "Total issues for repo limit reached " repo " " count)
(add-bounty-for-issue repo repo-id issue))))

(defn remove-bounty-for-issue [repo repo-id issue]
Contributor

Probably better ending with !

Author

Yes.

:issue_number issue-number
:title issue-title})))

(defn remove
Contributor

Probably better ending with !

Author

(log/debug "Posting comment to" (str owner "/" repo "/" issue-number) ":" comment)
(issues/create-comment owner repo issue-number comment (self-auth-params))))

(defn remove-deploying-comment
Contributor

Probably better ending with !

Author

use enum instead of table for data type storage

@pablodip pablodip left a comment


It looks okay to me.

I think that it would be nice to test it in a clean develop, since it includes database migrations and would be more difficult to revert. @siphiuel What do you think?

@vitvly

vitvly commented Feb 15, 2018

Agree.

@martinklepsch
Contributor

This seems good but I'm also a little bit worried about introducing a separate table. Wouldn't something like revoked_at as suggested by @pablodip be easier to work with?

Thinking about #284 we'd need to

  • read the JSON from the archive and parse that
  • handle old versions as the table schema might change over time but the JSON blob stays the same

Are there any specific benefits with the separate table approach?

@vitvly

vitvly commented Feb 20, 2018

Good point! Initially I opted for the archive simply because it was a less intrusive code change and did not imply changing multiple queries and associated middleware, hence a smaller probability of regressions. Smart vocabulary aside, I was plain lazy :)

Still, I can think of two things in favor of the archive approach:

  1. We'd like to have a DB model that works in an append-only fashion. No data can be deleted without a trace, so that we can recover what happened to each issue and when (audit trail capability). Currently the DB is append-only with regards to issues and pull requests - data is always appended. The archive can record multiple deletions of a single issue's bounty, while revoked_at will record the most recent one.
  2. Related to the concern that the archive serves a double purpose: @pablodip suggested modeling contracts in an ethereum_contract table, and changing issues to include a relation to ethereum_contract (Ethereum contract model #267). This would relieve the archive of non-archival duties.

Actually, the whole audit-trail related thing is important and requires more thought.

@pablodip
Contributor

pablodip commented Feb 20, 2018

To me the reason not to tackle re-modelling here isn't just laziness, but strategy. Introducing new tables or fields requires extra work in some existing things, and so might produce bugs as well. And we already intend to re-model with a different structure. So tackling re-modelling both here and in the new structure probably isn't needed. Besides, the feature that depends on the archive is an optional one (re-using a contract) and easily refactored if needed.

@martinklepsch
Contributor

Actually, the whole audit-trail related thing is important and requires more thought.

Agree. Event Sourcing may be an interesting approach for this as well and is fundamentally similar to how Datomic works.
Given the complexities around this issue my feeling is that we should probably "ignore it" for now and later on allocate some time to think about how to implement it properly.

@pablodip
Contributor

Agree. Event Sourcing may be an interesting approach for this as well and is fundamentally similar to how Datomic works.

I personally don't think they're similar, since Event Sourcing is just about modelling and persisting domain events, while Datomic is a general-purpose database that adds the time dimension.

@churik
Contributor

churik commented Mar 14, 2018

1. GH comment is not posted after deploying contract

Steps:
Requirements: GH account is whitelisted, signed app, test application is added to repo;

Actual result:

GH comment with contract details

Expected result:

no GH comment

Video: http://take.ms/SMVwj
OS: Mac OS High Sierra
Browser: Chrome 64



Development

Successfully merging this pull request may close these issues.

Ability to readd bounty (delete and add again)

5 participants