3 changes: 3 additions & 0 deletions .gitignore
@@ -0,0 +1,3 @@
__pycache__/
curpage.html
internet.json
16 changes: 13 additions & 3 deletions README.md
@@ -1,22 +1,32 @@
# Dead-Internet
So we all know the classic [Dead Internet Theory](https://en.wikipedia.org/wiki/Dead_Internet_theory), and if you're reading this I assume you at least know what an LLM is. Need I say much more? Yeah of course!

This is a little project I threw together in a couple hours that lets you surf a completely fake web! You run a search query in the only non-generated page `/` and it generates a search results page with fake links that lead to fake websites that lead to more fake websites!
It's not perfect, not by a long shot, but it works well enough for me to spend like an hour just going through it and laughing at what it makes.

If you encounter any issues with the search results page, reload and it'll generate a new page. If you get any issues with the other generated pages, try making slight adjustments to the URL to get a different page; right now there isn't a way to regenerate a page.

Also when you navigate to the `/_export` path or kill the server, the JSON of your current internet will be saved to the file `internet.json` in the root of the project. Right now you can't load it back yet, but maybe I'll add that in the future if I want, or you could fork it and add it yourself; the code isn't very complicated at all.
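If you do want to take a crack at loading, a rough sketch might look something like this (a hypothetical `import_internet` method that mirrors `export_internet` in ReaperEngine.py; untested, so treat it as a starting point):

```python
import json

# Hypothetical counterpart to export_internet, not in the repo (yet?)
def import_internet(self, filename="internet.json"):
    # Load a previously exported internet back into the page cache
    with open(filename) as f:
        self.internet_db = json.load(f)
```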

## How do I run this???
Simple, first install Ollama [here](https://ollama.com/download), then pull your model of choice. The one I used is [Llama 3 8B Instruct](https://ollama.com/library/llama3) which works really well and is very impressive for an 8B model. If you don't want to use Ollama you can use any other OpenAI-compatible server by modifying the `client` declaration in ReaperEngine.py to point at your server; I recommend [llama.cpp's server example](https://github.com/ggerganov/llama.cpp/tree/master/examples/server) for something lightweight, or [text-generation-webui](https://github.com/oobabooga/text-generation-webui/) for a fully featured LLM web interface.

Due to popular demand and it not being 12am anymore I finally added a requirements.txt file! Now instead of manually installing dependencies you can just run `pip install -r requirements.txt` in the root of the project and it'll install them all for you!

If you want to install dependencies manually instead, you'll first need Python if you don't already have it (I run Python 3.10.12, which came with my Linux Mint install); then the libraries you'll need are:
- [OpenAI](https://pypi.org/project/openai/)
- [BeautifulSoup4](https://pypi.org/project/beautifulsoup4/)
- [Flask](https://pypi.org/project/Flask/)
- [python-dotenv](https://pypi.org/project/python-dotenv/)

You can install them by running `pip install openai beautifulsoup4 flask python-dotenv`

You can modify the API URL and API key in the `.env` file.
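For example, a minimal `.env` for a local Ollama setup might look like this (the variable names are the ones ReaperEngine.py reads; the values are just placeholders, swap in your own):

```
# any OpenAI-compatible server works here
BASE_URL=http://localhost:11434/v1/
API_KEY=Dead Internet
# optional image support, see below
ENABLE_IMAGES=false
SEARXNG_URL=http://localhost:8080/search
MAX_IMAGE_WIDTH=300
```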

Once those are installed, simply run `main.py` and navigate to http://127.0.0.1:5000 (or whatever URL Flask gives you) and have fun!

## Image Support

Optional image support is implemented using the [SearXNG](https://docs.searxng.org/) search engine. To enable it, set the `ENABLE_IMAGES` environment variable to `true` and point the `SEARXNG_URL` environment variable at your SearXNG instance. This does require the JSON output format to be enabled in your instance's `settings.yml`, which it is not by default.
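If you run your own instance, that means adding `json` to the allowed formats in `settings.yml`, something like this (check the SearXNG docs for your version):

```yaml
search:
  formats:
    - html
    - json
```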

## Inspiration
I'll admit it, I'm not the most creative person. I got this idea from [this reddit comment on r/localllama](https://new.reddit.com/r/LocalLLaMA/comments/1c6ejb8/comment/l02eeqx/), so thank you very much commenter!
106 changes: 85 additions & 21 deletions ReaperEngine.py
@@ -1,52 +1,112 @@
import os
import re # needed for the width parsing in _format_page
import json
import requests
import random
from openai import OpenAI
from bs4 import BeautifulSoup
from dotenv import load_dotenv


''' About the name...
I apologise for it sounding pretentious or whatever, but I don't care; it sounds cool and cyberpunk-y(-ish)
and fits with the Dead Internet Theory theme of this little project
'''

load_dotenv()

class ReaperEngine:
def __init__(self):
        self.client = OpenAI(base_url=os.getenv("BASE_URL"), api_key=os.getenv("API_KEY")) # Ollama is pretty cool
self.internet_db = dict() # TODO: Exporting this sounds like a good idea, losing all your pages when you kill the script kinda sucks ngl, also loading it is a thing too

self.temperature = 2.1 # Crank up for goofier webpages (but probably less functional javascript)
self.max_tokens = 4096

        # os.getenv returns a string, so compare against "true" instead of using
        # bool() (which would be True for any non-empty value, including "false")
        self.enable_images = os.getenv("ENABLE_IMAGES", "").lower() == "true"

self.system_prompt = "You are an expert in creating realistic webpages. You do not create sample pages, instead you create webpages that are completely realistic and look as if they really existed on the web. You do not respond with anything but HTML, starting your messages with <!DOCTYPE html> and ending them with </html>. If a requested page is not a HTML document, for example a CSS or Javascript file, write that language instead of writing any HTML."

if self.enable_images:
self.system_prompt += " If the requested page is an image file, with an alt tag. Images should always have an alt tag. Images should always have a width attribute. If the requested page is instead an other non-text resource, attempt to generate an appropriate resource for it instead of writing any HTML."
else:
self.system_prompt += " If the requested page is instead an image file or other non-text resource, attempt to generate an appropriate resource for it instead of writing any HTML. You use very little to no images at all in your HTML, CSS or JS."

def image_search(self, keyword):
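        # Look up `keyword` on a SearXNG instance and return the first result's
        # image URL; returns None when there are no results and a placeholder
        # URL if the request fails.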
# URL of the SearXNG API
url = os.getenv("SEARXNG_URL")

params = {
'q': keyword,
'format': 'json',
'categories': 'images'
}

try:
            response = requests.get(url, params=params, timeout=10) # don't hang forever if the instance is down
response.raise_for_status()
data = response.json()

if data['results']:
return data['results'][0]['img_src'] # Return the source URL of the first image
else:
return None

except requests.RequestException as e:
print(f"Error fetching image: {e}")
return "https://via.placeholder.com/100"

def _format_page(self, dirty_html):
# Teensy function to replace all links on the page so they link to the root of the server
# Also to get rid of any http(s), this'll help make the link database more consistent
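        # e.g. <a href="https://example.com/help"> becomes <a href="/example.com/help">,
        # so every click routes back through this Flask server instead of the real web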

soup = BeautifulSoup(dirty_html, "html.parser")

# Replace any https references to keep the link database consistent
for a in soup.find_all("a"):
print(a["href"])
if "mailto:" in a["href"]:
href = a.get("href", "")
if "mailto:" in href:
continue
a["href"] = a["href"].replace("http://", "")
a["href"] = a["href"].replace("https://", "")
a["href"] = "/" + a["href"]
clean_href = href.replace("http://", "").replace("https://", "")
a["href"] = "/" + clean_href

        # Update and adjust image tags (only touch them when image support is enabled)
        if self.enable_images:
            for img in soup.find_all("img"):
                if "width" not in img.attrs:
                    # Assign a random width between 100 and 300px if width is not present
                    img["width"] = str(random.randint(100, 300))
                else:
                    # Extract the digits from the width value and clamp to MAX_IMAGE_WIDTH
                    # (defaulting to 300 if the variable isn't set)
                    width = re.findall(r'\d+', str(img["width"]))
                    max_width = re.findall(r'\d+', os.getenv("MAX_IMAGE_WIDTH", "300"))[0]
                    if width and int(width[0]) > int(max_width):
                        img["width"] = max_width

                # Swap the generated src for a real image found via the alt text
                alt_text = img.get("alt", "")
                new_src = self.image_search(alt_text)
                if new_src: # keep the generated src if the search found nothing
                    img["src"] = new_src

        return str(soup)

def get_index(self):
# Super basic start page, just to get everything going
return "<!DOCTYPE html><html><body><h3>Enter the Dead Internet</h3><form action='/' ><input name='query'> <input type='submit' value='Search'></form></body></html>"

def get_page(self, url, path, query=None):
        # Return the cached page if it has already been generated
        try: return self.internet_db[url][path]
        except KeyError: pass

        # Construct the basic prompt
        prompt = f"Give me a classic geocities-style webpage from the fictional site of '{url}' at the resource path of '{path}'. Make sure all links generated either link to an external website, or if they link to another resource on the current website have the current url prepended ({url}) to them. For example if a link on the page has the href of 'help' or '/help', it should be replaced with '{url}/help'. All your links must use absolute paths, do not shorten anything. Make the page look nice and unique using internal CSS stylesheets, don't make the pages look boring or generic."
# TODO: I wanna add all other pages to the prompt so the next pages generated resemble them, but since Llama 3 is only 8k context I hesitate to do so

# Add other pages to the prompt if they exist
if url in self.internet_db and len(self.internet_db[url]) > 1:
pass

# Generate the page
generated_page_completion = self.client.chat.completions.create(messages=[
{
@@ -62,15 +122,19 @@ def get_page(self, url, path, query=None):
max_tokens=self.max_tokens
)

        # Get and format the page (only format once: _format_page prepends "/" to
        # every link, so running it again would mangle the hrefs)
        generated_page = generated_page_completion.choices[0].message.content
        generated_page = self._format_page(generated_page)

        # Add the page to the database
        if url not in self.internet_db:
            self.internet_db[url] = dict()
        self.internet_db[url][path] = generated_page

        # Keep a copy of the latest page around for debugging
        with open("curpage.html", "w+") as f:
            f.write(generated_page)
        return generated_page

def get_search(self, query):
        # Generates a cool little search page; this differs with every search and is not cached, so be wary of losing links
search_page_completion = self.client.chat.completions.create(messages=[
@@ -80,14 +144,14 @@ def get_search(self, query):
},
{
"role": "user",
"content": f"Generate the search results page for a ficticious search engine where the search query is '{query}'. Please include at least 10 results to different ficticious websites that relate to the query. DO NOT link to any real websites, every link should lead to a ficticious website. Feel free to add a bit of CSS to make the page look nice. Each search result will link to its own unique website that has nothing to do with the search engine. Make sure each ficticious website has a unique and somewhat creative URL. Don't mention that the results are ficticious."
"content": f"Generate the search results page for a ficticious search engine where the search query is '{query}'. Please include at least 10 results to different ficticious websites that relate to the query. DO NOT link to any real websites, every link should lead to a ficticious website. Feel free to add a bit of CSS to make the page look nice. Each search result will link to its own unique website that has nothing to do with the search engine and is not a path or webpage on the search engine's site. Make sure each ficticious website has a unique and somewhat creative URL. Don't mention that the results are ficticious."
}],
model="llama3",
temperature=self.temperature,
max_tokens=self.max_tokens
)

        return self._format_page(search_page_completion.choices[0].message.content)

def export_internet(self, filename="internet.json"):
        with open(filename, "w+") as f:
            json.dump(self.internet_db, f)
4 changes: 4 additions & 0 deletions requirements.txt
@@ -0,0 +1,4 @@
flask
openai
beautifulsoup4
python-dotenv