- clone the repository
git clone link_to_repo
- create a venv and install requirements
cd Phase4
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
- create a new SQL user and grant perms
sudo mysql
CREATE USER IF NOT EXISTS 'phase4_user'@'localhost' IDENTIFIED BY 'phase4_pass';
GRANT ALL PRIVILEGES ON Silicon_World__db.* TO 'phase4_user'@'localhost';
FLUSH PRIVILEGES;
- execute the reset_and_seed.sh script to load the schema and populate it
chmod +x reset_and_seed.sh
./reset_and_seed.sh
- run the python script to interact and query with the DB
python3 main_app.py
- Instructions for authentication once main_app.py starts:
- When prompted for username, enter: "phase4_user"
- When prompted for password, enter: "phase4_pass"
- For any remaining prompts, press Enter until login authentication completes, then follow the section below for role-based usage.
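The "press Enter" behaviour above relies on prompts falling back to a default when the input is empty. A minimal sketch of that pattern (the helper name `resolve` is illustrative, not taken from main_app.py):

```python
def resolve(entered: str, default: str) -> str:
    """Return what the user typed, or the default when they just hit Enter."""
    return entered.strip() or default

# Empty input falls back to the default; explicit input wins.
print(resolve("", "phase4_user"))            # → phase4_user
print(resolve("other_user", "phase4_user"))  # → other_user
```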
Running python3 main_app.py now starts with a short prompt asking which type of user is operating the CLI. Pick the profile that matches your task; the menu dynamically filters to the operations that role is allowed to run.
- Complex-being (option 1) – full administrative access. Sees every analytics query, derived attribute, daemon update, and table viewer. Use this when you need unrestricted troubleshooting or demos.
- Representative (option 2) – focused on planning upgrades. Can run operations 1‑10 (all core READ/WRITE queries) plus derived inspectors 14‑18. No daemon updates or raw table viewers.
- Researcher (option 3) – read/analysis only. Can execute queries 1‑7, derived inspectors 14‑18, and read-only table viewers 19‑28 to study component metadata without changing state.
- Monitoring Daemon (option 4) – automated health agents. Restricted to the daemon update operations (11‑13) and the metrics they need (derived 14‑17 plus CPU/GPU/Core table viewers). Designed for scripts that push utilisation/throughput counters every few seconds.
If a user tries to invoke an operation outside their role, the CLI blocks the request with a friendly warning.
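The role-based filtering above can be sketched as a simple role-to-operations map checked before dispatch. The map below mirrors the menu numbers quoted in this README; the exact table-viewer numbers assigned to the Monitoring Daemon are omitted, and the structure is an assumption about how main_app.py organises this, not a copy of it:

```python
# Hypothetical role → allowed-operation map, mirroring the README's menu numbers.
ROLE_OPERATIONS = {
    "Complex-being": set(range(1, 29)),                        # full administrative access
    "Representative": set(range(1, 11)) | set(range(14, 19)),  # ops 1-10 + inspectors 14-18
    "Researcher": set(range(1, 8)) | set(range(14, 29)),       # queries 1-7, inspectors, viewers 19-28
    "Monitoring Daemon": set(range(11, 14)) | set(range(14, 18)),  # daemon updates 11-13 + derived 14-17
}

def allowed(role: str, op: int) -> bool:
    """True when the role's filtered menu includes the operation number."""
    return op in ROLE_OPERATIONS.get(role, set())

def visible_menu(role: str) -> list[int]:
    """The dynamically filtered menu shown to this role."""
    return sorted(ROLE_OPERATIONS.get(role, set()))

print(allowed("Representative", 11))  # → False: no daemon updates for planners
```

Blocking out-of-role requests is then a single `allowed(...)` check before running the operation, which is how the "friendly warning" behaviour can be implemented.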
Three new UPDATE helpers were added specifically for monitoring daemons (and Complex-beings):
- Update CPU utilisation – prompts for `CPU.component_id` and a new utilisation percentage (0-100), then updates `CPU.utilisation`.
- Update GPU throughput – prompts for `GPU.component_id` and a positive integer throughput value, then updates `GPU.throughput`.
- Update NIC packet_dropped – prompts for `NIC.component_id` and a non-negative packet count, then updates `NIC.packet_dropped`.
Only the Complex-being and Monitoring Daemon profiles can see these options (menu numbers 11‑13). Representatives/Researchers will never see them and therefore cannot accidentally mutate telemetry data.
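A minimal sketch of one such helper, validating the 0-100 range before issuing a parameterised UPDATE. SQLite is used here as a stand-in for MySQL so the example is self-contained (MySQL connectors use `%s` placeholders instead of `?`), and the single-column schema is illustrative:

```python
import sqlite3

def update_cpu_utilisation(conn, component_id: int, utilisation: float) -> int:
    """Validate the range first, as the CLI helper does, then update the row.

    Returns the number of rows changed (0 means no such component_id)."""
    if not 0 <= utilisation <= 100:
        raise ValueError("utilisation must be between 0 and 100")
    cur = conn.execute(
        "UPDATE CPU SET utilisation = ? WHERE component_id = ?",
        (utilisation, component_id),
    )
    conn.commit()
    return cur.rowcount

# Toy in-memory stand-in for the real CPU table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CPU (component_id INTEGER PRIMARY KEY, utilisation REAL)")
conn.execute("INSERT INTO CPU VALUES (1, 10.0)")
print(update_cpu_utilisation(conn, 1, 55.0))  # → 1
```

The GPU throughput and NIC packet-count helpers follow the same shape, swapping in their own validation rule (positive integer, non-negative count).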
For convenience, derived inspectors now sit under menu numbers 14‑18:
- CPU max_threads – multiplies `core_count * threads_per_core` using live `Core` data.
- RAM ram_bandwidth – multiplies `bus_width * transfers_per_clock * speed_MHz` to estimate theoretical bandwidth.
- CPU efficiency_score – `CPU.utilisation / Component.power_draw_watts` per processor.
- GPU efficiency – `GPU.throughput / Component.power_draw_watts` per GPU.
- Disk utilisation – sums installed software sizes per disk and reports a grand total.
These inspectors are available to every role; only the Representative's table viewers are intentionally hidden, to keep that menu concise.
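The derived-attribute formulas above are plain arithmetic over stored columns, so they can be sketched directly (the sample values below are illustrative, and the bandwidth units depend on the schema's conventions for `bus_width` and `speed_MHz`):

```python
def max_threads(core_count: int, threads_per_core: int) -> int:
    """Derived CPU attribute: core_count * threads_per_core."""
    return core_count * threads_per_core

def ram_bandwidth(bus_width: int, transfers_per_clock: int, speed_mhz: int) -> int:
    """Derived RAM attribute: bus_width * transfers_per_clock * speed_MHz."""
    return bus_width * transfers_per_clock * speed_mhz

def efficiency_score(numerator: float, power_draw_watts: float) -> float:
    """Shared shape of the CPU and GPU scores: utilisation (or throughput) per watt."""
    return numerator / power_draw_watts

# Illustrative values: an 8-core/2-thread CPU and 64-bit DDR-style RAM at 3200 MHz.
print(max_threads(8, 2))             # → 16
print(ram_bandwidth(64, 2, 3200))    # → 409600
print(efficiency_score(50.0, 100.0)) # → 0.5
```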
- List AM5 CPUs – selection query that returns every CPU using the AM5 socket, including core counts, clock speeds, and utilisation percentages so upgrade planners can validate board compatibility.
- Show software requirements – projection query over the `Software` table that prints disk footprint plus RAM/VRAM needs for each title, helping users verify their build can run a workload.
- Highlight fastest RAM – aggregate query that finds the maximum `speed_MHz` in the RAM catalog and lists every stick that reaches that ceiling along with size and MT/s figures.
- Search benchmark tools – search filter that extracts only `Benchmark-Tool` software entries to help QA engineers pick stress apps quickly.
- Intel CPUs on ASUS boards – analysis join that surfaces which Intel processors already have validated compatibility records with ASUS motherboards inside the `Are_compatible` table.
- Low-power GPUs – energy report that lists GPUs whose `power_draw_watts` is under five, enabling small-form-factor builds or edge deployments to spot efficient accelerators.
- Motherboard support for a GPU – parameterised lookup that takes any GPU model number (default NVIDIA 4050 RTX) and returns the socket/form-factor of every board paired with it in the compatibility data. For this query no output was generated, which is correct, as no motherboard was compatible with the default GPU.
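The parameterised motherboard lookup is the kind of join that can legitimately return nothing, as observed above. A minimal sketch of it, using an in-memory SQLite database as a stand-in for MySQL; the column names (`model`, `gpu_id`, `mobo_id`) are assumptions about the schema, not copied from it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE GPU (component_id INTEGER PRIMARY KEY, model TEXT);
CREATE TABLE Motherboard (component_id INTEGER PRIMARY KEY, socket TEXT, form_factor TEXT);
CREATE TABLE Are_compatible (gpu_id INTEGER, mobo_id INTEGER);
-- The GPU exists, but no compatibility row pairs it with a board.
INSERT INTO GPU VALUES (1, 'NVIDIA 4050 RTX');
""")

rows = conn.execute(
    """
    SELECT m.socket, m.form_factor
    FROM GPU g
    JOIN Are_compatible c ON c.gpu_id = g.component_id
    JOIN Motherboard m ON m.component_id = c.mobo_id
    WHERE g.model = ?
    """,
    ("NVIDIA 4050 RTX",),
).fetchall()
print(rows)  # → [] : an empty result, matching the README's observation for the default GPU
```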
- Insert CPU for existing manufacturer – guided write workflow that first verifies the manufacturer, inserts a base `Component`, and then adds the CPU record with socket/core/clock/utilisation details. We first demonstrated how inserting a CPU with a primary key already held by another component caused an error, showing the enforcement of constraints; we then demonstrated a valid insert into the CPU table.
- Update software disk location – targeted update that changes `Software.location_on_disk`, useful when moving a title between drives without re-seeding the DB.
- Delete manufacturer with cascade – administrative delete that drops a manufacturer and, via foreign-key cascades, removes every dependent component to keep referential integrity intact.
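The constraint-enforcement demo and the cascading delete can both be sketched against a toy version of the schema. SQLite stands in for MySQL here (note that SQLite needs `PRAGMA foreign_keys = ON`, which MySQL/InnoDB does not), and the table columns are simplified illustrations of the real ones:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite-specific; InnoDB enforces FKs by default
conn.executescript("""
CREATE TABLE Manufacturer (name TEXT PRIMARY KEY);
CREATE TABLE Component (
    component_id INTEGER PRIMARY KEY,
    manufacturer TEXT REFERENCES Manufacturer(name) ON DELETE CASCADE
);
CREATE TABLE CPU (
    component_id INTEGER PRIMARY KEY REFERENCES Component(component_id) ON DELETE CASCADE,
    socket TEXT
);
INSERT INTO Manufacturer VALUES ('AMD');
INSERT INTO Component VALUES (1, 'AMD');
INSERT INTO CPU VALUES (1, 'AM5');
""")

# Step 1 of the demo: a duplicate primary key is rejected by the DB.
try:
    conn.execute("INSERT INTO Component VALUES (1, 'AMD')")
except sqlite3.IntegrityError as err:
    print("rejected:", err)

# Step 2: deleting the manufacturer cascades through Component down to CPU.
conn.execute("DELETE FROM Manufacturer WHERE name = 'AMD'")
print(conn.execute("SELECT COUNT(*) FROM CPU").fetchone()[0])  # → 0
```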
The first 10 operations were executed by a user of the type 'Complex-being', which is effectively the system admin, and has access to all operations.
Then we demonstrated a different user (daemon) whose only job is to update certain attributes for certain components.
In addition to the above, we also support utility views and derived-attribute calculations, as mentioned in Phase 3's documentation. These are functional but couldn't be demonstrated because of the 5-minute constraint.