
Conversation

@creeperita09

I felt this repository was missing a logo, so I made one by modifying the main menu screen so that it says "reinforcement learning version". The GBA image was ripped from the files of Pokémon FireRed.

added the logo
updated with logo
added gba logo
changed logo to use gba logo as it's sideways and it looks better
@PWhiddy
Owner

PWhiddy commented Nov 5, 2023

Hi! This is really cool, I love the "reinforcement learning edition" text. However, there are already a lot of images in the readme, so I'm not sure it's a good place to add these extra large images.

@creeperita09
Author

Yeah, I agree that there are already a lot of images. Maybe I can make a version that still looks good at a small size, so you can put it alongside the title, kind of like a favicon.

@CrizzlyR

CrizzlyR commented Nov 7, 2023

I have been trying hard to provide specific feedback, and have enjoyed navigating the site, but so far I have only been able to contribute within the bounds of commentary.

I have input on the "AI" implications.

I'll make them available here, but what can I do to get these notes into the collaboration?

Pasting from an attempt:

First "pull request" certainly open to feedback towards where suggestions are considered... To begin with:

1. You are simulating the game, and that is a credit to the game's remarkable functionality.

2. You are constructing a fractal, though you go off base when interrupting to take over the emulator. You really need to let it play through for a conclusive result, which requires much higher processing power to isolate bugs. The "conclusive result" is plural and would contain a variety of outcomes if produced with the game rather than on the emulator, in reference to the psychology of projected uncontrolled variables. Captures cannot be repeated without damage, as the original is sustainably random and the ROM is a copy of a single playthrough or chain of events.*

3. The fractal is functional entirely through the control of the parameters of the reinforced (incentivized) learning, and taking it over produces two incomplete theories.

4. You will want much more processing power to run the simulation to the next bug for data analysis. You don't need to observe the button sequencing immediately, because it works as a controlled variable. Once the game has been played through, a determination of the button sequence might enhance the outcome's complexity.

*A fact check might be useful on this
