Conversation
## Overview

Fixed critical multi-instance isolation bugs where versions, mod loaders, and instance state were not properly isolated between instances. These changes ensure full data isolation and consistent instance metadata.

## Bug Fixes - P0 (Critical Isolation Issues)

### 1. Backend: get_versions() command isolation
- Problem: Used global app_data_dir instead of instance-specific game_dir
- Fix: Added instance_id parameter, now queries instance.game_dir
- Impact: Versions are now properly isolated per instance

### 2. Frontend: delete_version missing instanceId
- Problem: Frontend passed only versionId, not instanceId
- Fix: Updated VersionsView.svelte to pass instanceId parameter
- Impact: Version deletion now targets correct instance

### 3. Frontend: get_version_metadata missing instanceId
- Problem: Metadata queries didn't specify which instance to check
- Fix: Updated VersionsView.svelte to pass instanceId parameter
- Impact: Version info displayed per-instance correctly

### 4. Frontend: Instance switching doesn't refresh versions
- Problem: Switching instances didn't reload version list
- Fix: Added $effect hook in GameState to watch activeInstanceId changes
- Impact: Version list auto-refreshes on instance switch

## Bug Fixes - P1 (State Synchronization)

### 5. Backend: install_fabric doesn't update Instance.mod_loader
- Problem: Instance.mod_loader field wasn't updated after installation
- Fix: Added instance_state.update_instance() call
- Impact: Instance metadata stays in sync

### 6. Backend: install_forge doesn't update Instance.mod_loader
- Problem: Instance.mod_loader field wasn't updated after installation
- Fix: Added instance_state.update_instance() call
- Impact: Instance metadata stays in sync

### 7. Backend: delete_version doesn't clean up Instance state
- Problem: Deleting version didn't clear Instance.version_id or .mod_loader
- Fix: Added cleanup logic to clear stale references
- Impact: Instance state remains valid after deletion

## Testing
- Added comprehensive integration tests in instance_isolation_tests.rs
- Tests document 10 key scenarios for multi-instance isolation
- All code compiles cleanly with no errors
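For illustration, here is a minimal sketch of what the instance-scoped `get_versions` path might look like after fix #1. The state types, helper names, and directory layout below are assumptions for the sketch, not the project's actual code; the point is only that the versions directory is derived from the instance's `game_dir` rather than a global app data directory.

```rust
use std::path::PathBuf;

// Hypothetical state types standing in for the launcher's real ones.
struct Instance {
    id: String,
    game_dir: PathBuf,
}

struct InstanceState {
    instances: Vec<Instance>,
}

impl InstanceState {
    fn get_instance_game_dir(&self, id: &str) -> Option<PathBuf> {
        self.instances
            .iter()
            .find(|i| i.id == id)
            .map(|i| i.game_dir.clone())
    }
}

// Sketch: list versions from the per-instance `versions` directory instead of a
// global one, so each instance only ever sees its own installed versions.
fn get_versions(state: &InstanceState, instance_id: &str) -> Result<Vec<String>, String> {
    let game_dir = state
        .get_instance_game_dir(instance_id)
        .ok_or_else(|| format!("Unknown instance: {instance_id}"))?;
    let versions_dir = game_dir.join("versions");
    let mut versions = Vec::new();
    if versions_dir.exists() {
        for entry in std::fs::read_dir(&versions_dir).map_err(|e| e.to_string())? {
            let entry = entry.map_err(|e| e.to_string())?;
            if entry.path().is_dir() {
                versions.push(entry.file_name().to_string_lossy().into_owned());
            }
        }
    }
    Ok(versions)
}
```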
- Add pass_filenames: false to fmt hook
- Add -- separator for cargo fmt args
- Manually format code with cargo fmt
- Add migrate_to_shared_caches() with hard link preference
- SHA1-based deduplication across all instances
- Copy fallback for cross-filesystem scenarios
- Auto-enable use_shared_caches after successful migration
- UI shows statistics: moved files, hardlinks/copies, MB saved
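A minimal sketch of the hard-link-first, copy-fallback deduplication described above. The `sha1_smol` crate for hashing, the function names, and the return shape are assumptions for illustration, not the project's actual implementation.

```rust
use std::collections::HashMap;
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

// SHA1 of a file's contents, used as the dedup key (via the `sha1_smol` crate).
fn sha1_hex(path: &Path) -> io::Result<String> {
    let bytes = fs::read(path)?;
    let mut hasher = sha1_smol::Sha1::new();
    hasher.update(&bytes);
    Ok(hasher.digest().to_string())
}

/// Migrate one instance file into the shared cache: the shared copy becomes the
/// canonical file, and the instance path is re-created as a hard link to it
/// (or as a plain copy when hard linking fails, e.g. across filesystems).
/// Returns (hardlinked, copied) so the caller can tally migration statistics.
fn dedupe_file(
    src: &Path,
    dest: &Path,
    seen: &mut HashMap<String, PathBuf>,
) -> io::Result<(bool, bool)> {
    let hash = sha1_hex(src)?;
    let canonical = if let Some(existing) = seen.get(&hash).cloned() {
        existing // identical content is already in the shared cache
    } else {
        if let Some(parent) = dest.parent() {
            fs::create_dir_all(parent)?;
        }
        fs::copy(src, dest)?; // first occurrence: populate the cache
        seen.insert(hash, dest.to_path_buf());
        dest.to_path_buf()
    };
    fs::remove_file(src)?;
    match fs::hard_link(&canonical, src) {
        Ok(()) => Ok((true, false)), // hard link preferred
        Err(_) => fs::copy(&canonical, src).map(|_| (false, true)), // copy fallback
    }
}
```

Hard links keep a single physical copy per unique SHA1 while leaving every instance's expected file path intact; the copy fallback covers caches that live on a different filesystem.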
- Sync instance.version_id after start_game, install_fabric, install_forge
- Add jvm_args_override and memory_override to Instance struct
- Add file management commands: list_instance_directory, delete_instance_file, open_file_explorer
- Support per-instance settings overrides (Java args, memory)
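A rough sketch of how per-instance overrides might layer over the global launcher config when JVM arguments are assembled. The struct shapes echo the class diagram further down this page, but the merge logic itself is an assumption, not the launcher's actual code.

```rust
// Hypothetical simplified shapes; the real structs carry many more fields.
struct MemoryOverride {
    min: u32,
    max: u32,
}

struct LauncherConfig {
    min_memory: u32,
    max_memory: u32,
}

struct Instance {
    jvm_args_override: Option<String>,
    memory_override: Option<MemoryOverride>,
}

/// Build -Xms/-Xmx plus extra JVM args, letting the instance override the global config.
fn jvm_args(config: &LauncherConfig, instance: &Instance) -> Vec<String> {
    let (min, max) = match &instance.memory_override {
        Some(m) => (m.min, m.max), // per-instance memory wins
        None => (config.min_memory, config.max_memory),
    };
    let mut args = vec![format!("-Xms{min}M"), format!("-Xmx{max}M")];
    if let Some(extra) = &instance.jvm_args_override {
        args.extend(extra.split_whitespace().map(str::to_owned));
    }
    args
}
```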
- Create multi-step InstanceCreationModal with version and mod loader selection
- Step 1: Instance name input
- Step 2: Minecraft version selection with search and filtering
- Step 3: Mod loader choice (vanilla/Fabric/Forge) with version selection
- Automatically installs vanilla version + mod loader during creation
- Wire new modal to InstancesView, replace old simple creation dialog
- Remove unused confirmCreate function

This wizard integrates version management into the instance creation workflow, streamlining the user experience by combining instance setup and version installation into a single guided process.
- Create InstanceEditorModal.svelte with 4 tabs:
  * Info: Instance name, notes, metadata (created date, last played)
  * Version: Mod loader switcher and version display
  * Files: File browser for mods/resourcepacks/shaderpacks/saves/screenshots
  * Settings: Memory override and JVM arguments customization
- Wire InstanceEditorModal to InstancesView with Edit button
- Add FileInfo type definition to types/index.ts
- Fix accessibility issues: proper button roles, keyboard events
- All TypeScript and Svelte compilation errors resolved
- Enable comprehensive per-instance configuration management
Refer: http://semifold.noctisynth.org/

## Summary by Sourcery

Adopt Semifold-based release management and prepare the project for a 0.2.0-alpha pre-release across Rust and UI packages.

New Features:
- Introduce Semifold CI release workflow to orchestrate builds, publishing, and changelog management.
- Add Semifold status workflow to report release status on pull requests.
- Add workspace-level Node tooling and scripts to manage Tauri version bumping and project preparation.

Enhancements:
- Refactor GitHub Actions workflows for build, test, check, and lint to simplify pnpm usage, update action versions, and standardize artifact paths.
- Move Rust profile configuration into a new Cargo workspace and add a pnpm workspace for the UI package.
- Configure Semifold changelog and release metadata for Rust and Node packages, including alpha pre-release tagging.

Build:
- Replace the tag-triggered GitHub release workflow with a branch-based Semifold CI pipeline that builds artifacts for Linux, macOS, and Windows and uploads them for publishing.
- Update CI action versions (checkout, setup-node, upload-artifact, download-artifact) and adjust Rust cache configuration and artifact locations.
- Add root workspace package.json, pnpm-workspace.yaml, and pnpm overrides to centralize tooling and dependency management.

CI:
- Add a dedicated Semifold status GitHub Actions workflow for pull requests to main.
- Update existing check, lint, test, and build workflows to align with the new workspace layout and artifact directories.

Deployment:
- Integrate Semifold configuration for automated publishing to crates.io and npm, including pre-checks, post-version hooks, and use of stored build artifacts in releases.

Tests:
- Adjust test workflow artifact paths to match the new target directory structure and workspace configuration.

Chores:
- Remove unused Python packaging files and old Node lockfiles that are no longer part of the build or release process.
- Updated repository URL in Cargo.toml
- Updated GitHub API endpoint in main.rs
- Updated commit links in HomeView.svelte
- Updated issue template links in config.yml

Reviewed-by: Claude Sonnet 4.5
Recreated from #64

#64 includes non-linear commits that were rebased from an old version of the main branch, which contains large files.

## Summary by Sourcery

Update project metadata and links for the new HydroRoll-Team repository and bump versions for the alpha.1 pre-release.

New Features:
- Add changelog files for the Tauri app and UI packages documenting the v0.2.0-alpha.1 release.

Enhancements:
- Bump Tauri and UI package versions from 0.2.0-alpha to 0.2.0-alpha.1 to reflect the new pre-release.
- Update repository URLs, API endpoints, and commit links to point to the HydroRoll-Team/DropOut GitHub organization.

Documentation:
- Add release notes for v0.2.0-alpha.1 in separate changelog files for the backend and UI.
…sitive information

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
## Summary by Sourcery

Refactor the UI project structure into a pnpm monorepo packages layout and align tooling and automation with the new paths.

Enhancements:
- Reorganize the UI app from the root ui directory into packages/ui within a pnpm workspace.
- Update pnpm workspace configuration to include all packages under packages/*.
- Adjust paths in changeset configuration so the @dropout/ui package resolves from packages/ui.

Build:
- Update pre-commit configuration paths and arguments to reflect the new UI location and normalize hook argument formatting.
- Update Dependabot configuration so npm updates target /packages/ui instead of /ui.

CI:
- Update GitHub Actions workflows to watch packages/** instead of ui/** and to run frontend tasks from packages/ui.
- Update pnpm cache dependency paths in workflows to use the root pnpm-lock.yaml.
- Simplify frontend install steps in test workflows to run from the repository root.

Chores:
- Add a new index.html under packages/ui and remove the old ui/index.html to match the new project layout.
…sitive information (#71)
Reviewer's Guide

Synchronizes the launcher with main by introducing shared/global caches and feature-flagged launch arguments, enriching per-instance metadata and UI flows, wiring instance-aware version APIs, and migrating CI/release tooling to a Semifold-based monorepo setup under the HydroRoll-Team org.

Sequence diagram for new instance creation and mod loader installation

```mermaid
sequenceDiagram
actor User
participant InstancesView
participant InstanceCreationModal
participant TauriBackend
participant InstanceState
User->>InstancesView: Click Create Instance
InstancesView->>InstanceCreationModal: Open with isOpen = true
User->>InstanceCreationModal: Step 1 enter instanceName
User->>InstanceCreationModal: Step 2 select Version
User->>InstanceCreationModal: Step 3 choose modLoaderType and loader version
User->>InstanceCreationModal: Click Create Instance
InstanceCreationModal->>TauriBackend: invoke create_instance(name)
TauriBackend->>InstanceState: create_instance(name)
InstanceState-->>TauriBackend: Instance{id, game_dir, ...}
TauriBackend-->>InstanceCreationModal: Instance
InstanceCreationModal->>TauriBackend: invoke install_version(instanceId, versionId)
TauriBackend-->>InstanceCreationModal: Ok
alt modLoaderType == fabric
InstanceCreationModal->>TauriBackend: invoke install_fabric(instanceId, gameVersion, loaderVersion)
TauriBackend-->>InstanceCreationModal: Ok
else modLoaderType == forge
InstanceCreationModal->>TauriBackend: invoke install_forge(instanceId, gameVersion, forgeVersion)
TauriBackend-->>InstanceCreationModal: Ok
else modLoaderType == vanilla
InstanceCreationModal->>TauriBackend: invoke update_instance(instance with version_id)
TauriBackend-->>InstanceCreationModal: Ok
end
InstanceCreationModal->>InstancesView: Close modal, trigger instancesState.loadInstances()
InstancesView-->>User: New instance appears and can be selected
```
Sequence diagram for shared cache migration and config update

```mermaid
sequenceDiagram
actor User
participant SettingsView
participant TauriBackend
participant InstanceState
participant ConfigState
participant InstanceCore
User->>SettingsView: Click Migrate Now
SettingsView->>TauriBackend: invoke migrate_shared_caches()
TauriBackend->>InstanceState: borrow state
TauriBackend->>InstanceCore: migrate_to_shared_caches(app_handle, instance_state)
loop for each Instance
InstanceCore->>InstanceCore: deduplicate_directory(instance_versions, global_versions)
InstanceCore->>InstanceCore: deduplicate_directory(instance_libraries, global_libraries)
InstanceCore->>InstanceCore: deduplicate_directory(instance_assets, global_assets)
end
InstanceCore-->>TauriBackend: (moved_files, hardlinks, copies, saved_bytes)
TauriBackend->>ConfigState: lock config
TauriBackend->>ConfigState: set use_shared_caches = true
TauriBackend->>ConfigState: save()
ConfigState-->>TauriBackend: Ok
TauriBackend-->>SettingsView: MigrationResult
SettingsView->>SettingsView: settingsState.loadSettings()
SettingsView-->>User: Show success alert with saved MB
```
Class diagram for updated instance, config, and feature flag models

```mermaid
classDiagram
class InstanceState {
-instances: Arc<Mutex<InstancesConfig>>
+create_instance(name: String) Instance
+update_instance(instance: Instance) Result<(), String>
+delete_instance(id: String) Result<(), String>
+duplicate_instance(id: String, new_name: String) Result<Instance, String>
+get_instance(id: String) Option<Instance>
+get_instance_game_dir(id: String) Option<PathBuf>
+list_instances() Vec<Instance>
}
class Instance {
+id: String
+name: String
+game_dir: PathBuf
+created_at: i64
+last_played: Option<i64>
+java_path: Option<String>
+min_memory: Option<u32>
+max_memory: Option<u32>
+version_id: Option<String>
+icon: Option<String>
+notes: Option<String>
+mod_loader: Option<String>
+mod_loader_version: Option<String>
+jvm_args_override: Option<String>
+memory_override: Option<MemoryOverride>
}
class MemoryOverride {
+min: u32
+max: u32
}
class LauncherConfig {
+theme: String
+language: String
+java_path: Option<String>
+min_memory: u32
+max_memory: u32
+window_width: u32
+window_height: u32
+log_upload_service: String
+pastebin_api_key: Option<String>
+assistant: AssistantConfig
+use_shared_caches: bool
+keep_legacy_per_instance_storage: bool
+feature_flags: FeatureFlags
}
class FeatureFlags {
+demo_user: bool
+quick_play_enabled: bool
+quick_play_path: Option<String>
+quick_play_singleplayer: bool
+quick_play_multiplayer_server: Option<String>
}
class ConfigState {
+config: Mutex<LauncherConfig>
+save() Result<(), String>
}
class Rules {
+is_library_allowed(rules: Option<Vec<Rule>>, features: Option<FeatureFlags>) bool
}
class GameLaunchContext {
+feature_flags: FeatureFlags
+use_shared_caches: bool
+instance: Instance
}
InstanceState --> Instance
Instance --> MemoryOverride
ConfigState --> LauncherConfig
LauncherConfig --> FeatureFlags
GameLaunchContext --> FeatureFlags
GameLaunchContext --> Instance
Rules --> FeatureFlags
```
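To make the `Rules` / `FeatureFlags` relationship above concrete, here is a loose, simplified approximation of feature-flag-aware matching for Minecraft-manifest-style rules. The rule shape, feature names, and last-match-wins behavior are assumptions for illustration, not the project's actual implementation.

```rust
// Hypothetical simplified shapes for manifest rules and the launcher's flags.
struct FeatureFlags {
    demo_user: bool,
    quick_play_enabled: bool,
}

enum Action {
    Allow,
    Disallow,
}

struct Rule {
    action: Action,
    // Feature conditions the rule requires, e.g. ("is_demo_user", true).
    features: Vec<(String, bool)>,
}

/// Evaluate rules against the launcher's feature flags: the last matching rule
/// wins, and no rules at all means "allowed".
fn is_allowed(rules: Option<&[Rule]>, flags: &FeatureFlags) -> bool {
    let Some(rules) = rules else { return true };
    let mut allowed = false;
    for rule in rules {
        let matches = rule.features.iter().all(|(name, wanted)| {
            let actual = match name.as_str() {
                "is_demo_user" => flags.demo_user,
                "has_quick_plays_support" => flags.quick_play_enabled,
                _ => false, // unknown feature conditions never match
            };
            actual == *wanted
        });
        if matches {
            allowed = matches!(rule.action, Action::Allow);
        }
    }
    allowed
}
```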
Pull request overview
This PR syncs the branch with main and prepares for the v0.2.0-alpha.1 pre-release. The changes involve:
- Version bump from 0.1.26 to 0.2.0-alpha.1
- Repository transfer from HsiangNianian to HydroRoll-Team
- Major restructuring to a pnpm workspace monorepo
- Removal of Python-based tooling (pyproject.toml, uv.lock, _version.py)
- Addition of shared cache system for instances
- Enhanced native library selection (architecture-specific natives)
- Feature flags system for launcher arguments
- Frontend reorganization into packages/ui structure
Changes:
- Major backend refactoring with new instance management, shared caches, and file browser APIs
- Frontend migrated to pnpm workspace with new store implementations
- Build system updates for monorepo structure
- Breaking changes to command signatures (added instanceId parameters)
Reviewed changes
Copilot reviewed 40 out of 77 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| src-tauri/Cargo.toml | Version bump to 0.2.0-alpha.1, repository URL update, removed optimization profiles |
| src-tauri/tauri.conf.json | Version bump, updated build commands for pnpm workspace, frontend path change |
| src-tauri/src/main.rs | Major refactor: shared caches, native library arch selection, instance file management, feature flags integration |
| src-tauri/src/core/rules.rs | Feature flags support in rule matching, improved architecture compatibility |
| src-tauri/src/core/instance.rs | Shared cache migration functionality, instance metadata extensions |
| src-tauri/src/core/config.rs | Feature flags and storage management settings |
| packages/ui/* | Complete frontend restructure with new components, stores, and effects |
| pnpm-workspace.yaml | New monorepo workspace configuration |
| Removed files | Python tooling (pyproject.toml, uv.lock, _version.py), old UI files |
Files not reviewed (2)
- pnpm-lock.yaml: Language not supported
- ui/package-lock.json: Language not supported
Hey - I've found 4 issues, and left some high level feedback:
- In `migrate_shared_caches`, the `config_state.config` mutex is locked and cloned multiple times and then mutated in a confusing way; consider taking a single mutable lock, updating `use_shared_caches` once, and saving, to avoid redundant clones and potential race-prone patterns.
- The deduplication logic in `migrate_to_shared_caches` / `deduplicate_directory` builds an initial `dest_hashes` map but never updates it when new files are moved into the destination, so later files with the same content won't be detected as duplicates; updating the map after each successful move/copy would preserve deduplication across the entire run.
- The new Tauri commands `delete_instance_file` and `open_file_explorer` accept arbitrary file paths from the frontend without validating they belong to the instance's game directory, which could allow unintended deletion or opening of files outside the game data; consider restricting these operations to paths under known instance directories.
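To illustrate the second point, here is a minimal sketch of a `deduplicate_directory`-style loop that keeps its hash map current as files are moved. The signature, the `sha1_smol` hashing helper, and the move/copy details are assumptions based on the review comment, not the actual code.

```rust
use std::collections::HashMap;
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

fn sha1_hex(path: &Path) -> io::Result<String> {
    let bytes = fs::read(path)?;
    let mut hasher = sha1_smol::Sha1::new();
    hasher.update(&bytes);
    Ok(hasher.digest().to_string())
}

/// Move every file from `src_dir` into `dest_dir`, skipping content already present.
/// Crucially, `dest_hashes` is updated after each move so that duplicates appearing
/// later in the same run are still detected.
fn deduplicate_directory(
    src_dir: &Path,
    dest_dir: &Path,
    dest_hashes: &mut HashMap<String, PathBuf>,
) -> io::Result<()> {
    fs::create_dir_all(dest_dir)?;
    for entry in fs::read_dir(src_dir)? {
        let src = entry?.path();
        if !src.is_file() {
            continue;
        }
        let hash = sha1_hex(&src)?;
        if dest_hashes.contains_key(&hash) {
            fs::remove_file(&src)?; // identical content already in the shared cache
            continue;
        }
        let dest = dest_dir.join(src.file_name().unwrap());
        if fs::rename(&src, &dest).is_err() {
            // Cross-filesystem: fall back to copy + delete.
            fs::copy(&src, &dest)?;
            fs::remove_file(&src)?;
        }
        dest_hashes.insert(hash, dest); // keep the map current for later files
    }
    Ok(())
}
```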
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `migrate_shared_caches`, the `config_state.config` mutex is locked and cloned multiple times and then mutated in a confusing way; consider taking a single mutable lock, updating `use_shared_caches` once, and saving, to avoid redundant clones and potential race-prone patterns.
- The deduplication logic in `migrate_to_shared_caches` / `deduplicate_directory` builds an initial `dest_hashes` map but never updates it when new files are moved into the destination, so later files with the same content won't be detected as duplicates; updating the map after each successful move/copy would preserve deduplication across the entire run.
- The new Tauri commands `delete_instance_file` and `open_file_explorer` accept arbitrary file paths from the frontend without validating they belong to the instance's game directory, which could allow unintended deletion or opening of files outside the game data; consider restricting these operations to paths under known instance directories.
## Individual Comments
### Comment 1
<location> `src-tauri/src/main.rs:2417-2422` </location>
<code_context>
+ );
+
+ // Automatically enable shared caches config
+ let mut config = config_state.config.lock().unwrap().clone();
+ config.use_shared_caches = true;
+ drop(config);
+ *config_state.config.lock().unwrap() = config_state.config.lock().unwrap().clone();
+ config_state.config.lock().unwrap().use_shared_caches = true;
+ config_state.save()?;
+
+ Ok(MigrationResult {
</code_context>
<issue_to_address>
**issue:** Config mutation during migration is redundant and likely not doing what is intended.
This code clones the config, mutates `use_shared_caches`, then drops the clone without writing it back, and immediately re-locks the mutex multiple times, cloning and mutating again. The first mutation is discarded, and the mutex is locked three times in succession, which is both confusing and unnecessary.
You can simplify this to a single lock and mutation:
```rust
{
let mut guard = config_state.config.lock().unwrap();
guard.use_shared_caches = true;
}
config_state.save()?;
```
or, if you prefer to keep the guard alive through `save`:
```rust
let mut guard = config_state.config.lock().unwrap();
guard.use_shared_caches = true;
config_state.save()?;
```
</issue_to_address>
### Comment 2
<location> `packages/ui/src/components/VersionsView.svelte:220-222` </location>
<code_context>
try {
- await invoke("delete_version", { versionId: versionToDelete });
+ await invoke("delete_version", {
+ instanceId: instancesState.activeInstanceId,
+ versionId: versionToDelete
+ });
// Clear selection if deleted version was selected
</code_context>
<issue_to_address>
**issue (bug_risk):** `delete_version` invoke likely uses mismatched argument names compared to the Rust command.
The Rust command expects `instance_id: String` and `version_id: String`, but this invoke sends `instanceId` and `versionId`, which Tauri will not bind correctly unless the backend was changed accordingly. This should instead be:
```svelte
await invoke("delete_version", {
instance_id: instancesState.activeInstanceId,
version_id: versionToDelete,
});
```
Please also double-check other invokes in this PR for consistent snake_case argument names matching the Rust side.
</issue_to_address>
### Comment 3
<location> `src-tauri/src/core/instance.rs:380` </location>
<code_context>
+ app_handle: &AppHandle,
+ instance_state: &InstanceState,
+) -> Result<(usize, usize, usize, u64), String> {
+ let app_dir = app_handle.path().app_data_dir().unwrap();
+
+ // Global shared cache directories
</code_context>
<issue_to_address>
**issue (bug_risk):** `unwrap()` on `app_data_dir()` in migration can crash the app if the path lookup fails.
Here the migration is doing:
```rust
let app_dir = app_handle.path().app_data_dir().unwrap();
```
If `app_data_dir()` fails (e.g., due to OS/env issues), this will panic inside the Tauri command and crash the app instead of returning a handled error, unlike other call sites.
Consider propagating the error instead:
```rust
let app_dir = app_handle
.path()
.app_data_dir()
.map_err(|e| format!("Failed to get app data dir: {e}"))?;
```
so `migrate_to_shared_caches` can return a regular error to the frontend rather than panicking.
</issue_to_address>
### Comment 4
<location> `src-tauri/src/core/instance.rs:360` </location>
<code_context>
+
+/// Delete a file in an instance directory
+#[tauri::command]
+async fn delete_instance_file(path: String) -> Result<(), String> {
+ let path_buf = std::path::PathBuf::from(&path);
+ if path_buf.is_dir() {
+ tokio::fs::remove_dir_all(&path_buf)
+ .await
+ .map_err(|e| e.to_string())?;
+ } else {
+ tokio::fs::remove_file(&path_buf)
+ .await
+ .map_err(|e| e.to_string())?;
+ }
+ Ok(())
+}
+
</code_context>
<issue_to_address>
**🚨 suggestion (security):** Allowing arbitrary paths in `delete_instance_file` and `open_file_explorer` may be broader than intended.
These commands currently accept a `String` path from the frontend and use it directly for filesystem operations:
```rust
let path_buf = std::path::PathBuf::from(&path);
// remove_dir_all / remove_file
```
and
```rust
Command::new("explorer" | "open" | "xdg-open").arg(&path).spawn()
```
If the frontend is compromised or buggy, it could invoke them on paths outside the instance’s game directory. To restrict them to instance content, validate that the path resolves under an allowed instance directory (e.g. `canonicalize` + `starts_with`) before performing the operation.
</issue_to_address>
```rust
let mut config = config_state.config.lock().unwrap().clone();
config.use_shared_caches = true;
drop(config);
*config_state.config.lock().unwrap() = config_state.config.lock().unwrap().clone();
config_state.config.lock().unwrap().use_shared_caches = true;
config_state.save()?;
```
issue: Config mutation during migration is redundant and likely not doing what is intended.
This code clones the config, mutates use_shared_caches, then drops the clone without writing it back, and immediately re-locks the mutex multiple times, cloning and mutating again. The first mutation is discarded, and the mutex is locked three times in succession, which is both confusing and unnecessary.
You can simplify this to a single lock and mutation:

```rust
{
    let mut guard = config_state.config.lock().unwrap();
    guard.use_shared_caches = true;
}
config_state.save()?;
```

or, if you prefer to keep the guard alive through `save`:

```rust
let mut guard = config_state.config.lock().unwrap();
guard.use_shared_caches = true;
config_state.save()?;
```

```svelte
await invoke("delete_version", {
  instanceId: instancesState.activeInstanceId,
  versionId: versionToDelete
```
issue (bug_risk): delete_version invoke likely uses mismatched argument names compared to the Rust command.
The Rust command expects `instance_id: String` and `version_id: String`, but this invoke sends `instanceId` and `versionId`, which Tauri will not bind correctly unless the backend was changed accordingly. This should instead be:

```svelte
await invoke("delete_version", {
  instance_id: instancesState.activeInstanceId,
  version_id: versionToDelete,
});
```

Please also double-check other invokes in this PR for consistent snake_case argument names matching the Rust side.
```rust
    app_handle: &AppHandle,
    instance_state: &InstanceState,
) -> Result<(usize, usize, usize, u64), String> {
    let app_dir = app_handle.path().app_data_dir().unwrap();
```
issue (bug_risk): unwrap() on app_data_dir() in migration can crash the app if the path lookup fails.
Here the migration is doing:

```rust
let app_dir = app_handle.path().app_data_dir().unwrap();
```

If `app_data_dir()` fails (e.g., due to OS/env issues), this will panic inside the Tauri command and crash the app instead of returning a handled error, unlike other call sites.
Consider propagating the error instead:
```rust
let app_dir = app_handle
    .path()
    .app_data_dir()
    .map_err(|e| format!("Failed to get app data dir: {e}"))?;
```

so `migrate_to_shared_caches` can return a regular error to the frontend rather than panicking.
```diff
@@ -344,3 +359,226 @@ pub fn migrate_legacy_data(
     Ok(())
```
🚨 suggestion (security): Allowing arbitrary paths in delete_instance_file and open_file_explorer may be broader than intended.
These commands currently accept a `String` path from the frontend and use it directly for filesystem operations:

```rust
let path_buf = std::path::PathBuf::from(&path);
// remove_dir_all / remove_file
```

and

```rust
Command::new("explorer" | "open" | "xdg-open").arg(&path).spawn()
```

If the frontend is compromised or buggy, it could invoke them on paths outside the instance's game directory. To restrict them to instance content, validate that the path resolves under an allowed instance directory (e.g. `canonicalize` + `starts_with`) before performing the operation.
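A rough sketch of the containment check the reviewer suggests, assuming the instance's game directory is already available to the command; the helper name and error strings are illustrative, not the project's code.

```rust
use std::path::{Path, PathBuf};

/// Resolve `requested` and make sure it stays inside `game_dir` before any
/// destructive filesystem operation is performed on it.
fn resolve_inside_instance(game_dir: &Path, requested: &str) -> Result<PathBuf, String> {
    let root = game_dir
        .canonicalize()
        .map_err(|e| format!("Failed to resolve instance dir: {e}"))?;
    let candidate = PathBuf::from(requested)
        .canonicalize() // also normalizes `..` components and symlinks
        .map_err(|e| format!("Failed to resolve path: {e}"))?;
    if candidate.starts_with(&root) {
        Ok(candidate)
    } else {
        Err("Path is outside the instance's game directory".to_string())
    }
}

// Usage sketch inside a delete-style command:
fn delete_instance_file(game_dir: &Path, path: String) -> Result<(), String> {
    let safe = resolve_inside_instance(game_dir, &path)?;
    if safe.is_dir() {
        std::fs::remove_dir_all(&safe).map_err(|e| e.to_string())
    } else {
        std::fs::remove_file(&safe).map_err(|e| e.to_string())
    }
}
```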
Summary by Sourcery
Introduce shared global caches, instance-specific overrides, and improved CI/release tooling while restructuring the UI into a workspace-based monorepo.
New Features:
Bug Fixes:
Enhancements:
Build:
CI:
Documentation:
Chores: