
Conversation

Copilot AI (Contributor) commented Jan 9, 2026

Plan: Add Language-Specific Basic Usability Tests

Based on repository exploration, I need to add tests for each supported language:

  • Explore repository structure and understand existing tests
  • Identify supported languages (Python, Go, Rust, TypeScript/JavaScript, Deno)
  • Create test fixture projects for each language
  • Create language-specific test suite (test_language_support.py)
  • Add documentation for test fixtures
  • Address initial code review feedback
  • Address additional code review feedback
    • Remove unused time import
    • Add try/finally blocks to all tests for proper cleanup
    • Fix weak assertions in error handling tests
    • Pin TypeScript to a specific version (5.7.2)
    • Improve server reuse test with better parsing
    • Move BaseLSPTest to conftest.py for reusability
    • Fix README.md grammar
    • Fix JavaScript test cleanup consistency
    • Keep module scope for fixtures_dir (efficiency)
  • Run security check with codeql (No issues found)
  • All changes complete and reviewed
Original prompt

Add a suite of tests to verify basic usability for each supported language.



observerw and others added 2 commits January 9, 2026 01:55
Add platform restrictions to prevent installation on Windows due to current implementation limitations. Windows support is planned for the next version.

Changes:
- Add OS classifiers for Linux and macOS in pyproject.toml
- Add fake dependency that fails on Windows
- Add warning callout in README about Windows not being supported
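
For illustration, one common way to implement a dependency that fails on Windows is a source-only package whose setup.py aborts at install time. This sketch shows the general technique only; it is not necessarily the mechanism this commit used:

import sys

from setuptools import setup

if sys.platform == "win32":
    # Abort installation with a clear message instead of failing later
    # at runtime with confusing errors.
    raise RuntimeError(
        "This package does not support Windows yet; "
        "Windows support is planned for the next version."
    )

# Metadata comes from pyproject.toml; setup() needs no arguments here.
setup()
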
Copilot AI and others added 3 commits January 9, 2026 12:03
Co-authored-by: observerw <20661574+observerw@users.noreply.github.com>
Co-authored-by: observerw <20661574+observerw@users.noreply.github.com>
Co-authored-by: observerw <20661574+observerw@users.noreply.github.com>
observerw requested a review from Copilot January 9, 2026 12:08
Copilot AI changed the title from "[WIP] Add tests for basic availability in supported languages" to "Add integration tests for all supported language servers" on Jan 9, 2026
Copilot AI requested a review from observerw January 9, 2026 12:13
Copilot AI left a comment

Pull request overview

This WIP PR adds comprehensive language-support tests to verify the LSP CLI's basic usability across all supported languages (Python, Go, Rust, TypeScript, JavaScript, and Deno). The implementation includes a minimal test fixture project for each language and a new test suite that validates server lifecycle operations (start, list, stop) against each one.

Key Changes:

  • Created BaseLSPTest helper class to eliminate code duplication across test classes (see the sketch after this list)
  • Added language-specific test fixtures with proper project structure for each supported language
  • Implemented tests for server lifecycle, multi-language support, and error handling scenarios
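
A minimal sketch of what such a shared helper could look like; the CLI executable name (`lsp`) and the timeout value are assumptions for illustration, since neither appears verbatim in this conversation:

import subprocess


class BaseLSPTest:
    """Shared base class for LSP CLI integration tests, reusable via conftest.py."""

    def run_lsp_command(self, *args: str) -> subprocess.CompletedProcess[str]:
        # Run the CLI in a subprocess and capture decoded output so tests
        # can assert on returncode, stdout, and stderr.
        return subprocess.run(
            ["lsp", *args],
            capture_output=True,
            text=True,
            timeout=60,  # keep a hung language server from stalling the suite
        )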

Reviewed changes

Copilot reviewed 14 out of 14 changed files in this pull request and generated 17 comments.

Summary per file:

  • tests/test_language_support.py: New test suite with 8 test methods covering individual language support, multi-server scenarios, server reuse, and error handling
  • tests/fixtures/go_project/*: Minimal Go project fixture with go.mod and simple main package
  • tests/fixtures/rust_project/*: Minimal Rust project fixture with Cargo.toml and src structure
  • tests/fixtures/typescript_project/*: TypeScript project fixture with package.json, tsconfig.json, and sample code
  • tests/fixtures/javascript_project/*: JavaScript ES module project fixture with package.json
  • tests/fixtures/deno_project/*: Deno project fixture with deno.json configuration
  • tests/fixtures/README.md: Documentation explaining fixture purpose, structure, and requirements
  • tests/fixtures/.gitignore: Excludes build artifacts and lock files from version control


Comment on lines 86 to 102
def test_rust_support(self, fixtures_dir):
    """Test basic LSP operations with Rust project."""
    rust_file = fixtures_dir / "rust_project" / "src" / "main.rs"
    assert rust_file.exists(), "Rust test file does not exist"

    # Start server
    result = self.run_lsp_command("server", "start", str(rust_file))
    assert result.returncode == 0, f"Failed to start Rust server: {result.stderr}"

    # List servers - should show Rust server
    result = self.run_lsp_command("server", "list")
    assert result.returncode == 0, f"Failed to list servers: {result.stderr}"
    assert "rust" in result.stdout.lower(), "Rust server not listed"

    # Stop server
    result = self.run_lsp_command("server", "stop", str(rust_file))
    assert result.returncode == 0, f"Failed to stop Rust server: {result.stderr}"

Copilot AI Jan 9, 2026


Test lacks proper cleanup between start and stop. If any assertion fails after starting the Rust server but before stopping it, the server will remain running and could interfere with subsequent tests. Consider using a try/finally block or a pytest fixture to ensure the server is always stopped.

Copilot AI (author) replied:


Added try/finally cleanup in 6c32268.
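
As an alternative to try/finally, the fixture-based cleanup the review mentions could look like this; `_run_lsp` and `rust_server` are hypothetical names for illustration, not code from this PR:

import subprocess

import pytest


def _run_lsp(*args: str) -> subprocess.CompletedProcess[str]:
    # Illustrative stand-in for BaseLSPTest.run_lsp_command.
    return subprocess.run(["lsp", *args], capture_output=True, text=True)


@pytest.fixture
def rust_server(fixtures_dir):
    rust_file = fixtures_dir / "rust_project" / "src" / "main.rs"
    result = _run_lsp("server", "start", str(rust_file))
    assert result.returncode == 0, f"Failed to start Rust server: {result.stderr}"
    yield rust_file
    # Teardown after the yield runs even when the test body fails,
    # so the server cannot leak into subsequent tests.
    _run_lsp("server", "stop", str(rust_file))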

Comment on lines 259 to 262
# Should handle gracefully
result = self.run_lsp_command("server", "start", str(unsupported_file))
# Either fails with non-zero exit code or succeeds with appropriate message
assert isinstance(result.returncode, int)

Copilot AI Jan 9, 2026


Assertion is too weak and doesn't verify error handling behavior. The test only checks that returncode is an integer, which would pass even if the command succeeds unexpectedly. Consider asserting that result.returncode is non-zero for unsupported file types, or at minimum checking that no exception was raised during execution.

Suggested change
# Should handle gracefully
result = self.run_lsp_command("server", "start", str(unsupported_file))
# Either fails with non-zero exit code or succeeds with appropriate message
assert isinstance(result.returncode, int)
# Should handle gracefully by returning a non-zero exit code
result = self.run_lsp_command("server", "start", str(unsupported_file))
# Unsupported file types should not start a server successfully
assert result.returncode != 0, (
    f"Expected non-zero exit code for unsupported file type, "
    f"got {result.returncode}. stdout: {result.stdout!r} stderr: {result.stderr!r}"
)

Copilot AI (author) replied:


Fixed in 6c32268 - now asserts non-zero exit code for unsupported file types with detailed error message.

Comment on lines 229 to 232
# Should have the same number of Python servers
python_count1 = list1.stdout.lower().count("python")
python_count2 = list2.stdout.lower().count("python")
assert python_count1 == python_count2, "Server was not reused"

Copilot AI Jan 9, 2026


The string counting approach "python_count1 == python_count2" is fragile and could produce false positives. For example, if output contains "python" in error messages or file paths multiple times, the count might match even if server state changed. Consider parsing the list output more robustly, such as counting actual server entries or using a more structured comparison.

Suggested change
# Should have the same number of Python servers
python_count1 = list1.stdout.lower().count("python")
python_count2 = list2.stdout.lower().count("python")
assert python_count1 == python_count2, "Server was not reused"
# Should have the same number of servers for this specific Python file
python_file_str = str(python_file)
python_servers1 = [
    line for line in list1.stdout.splitlines() if python_file_str in line
]
python_servers2 = [
    line for line in list2.stdout.splitlines() if python_file_str in line
]
assert len(python_servers1) == len(python_servers2), "Server was not reused"

Copilot AI (author) replied:


Improved in 6c32268 to parse list output and count actual server entries for the specific Python file instead of simple string counting.

import pytest


@pytest.fixture(scope="module")

Copilot AI Jan 9, 2026


Module-scoped fixture might not be appropriate here. Since this fixture just returns a path and has no setup/teardown, there's no performance benefit from module scope. The default function scope would be more conventional and wouldn't change behavior. Consider using function scope or removing the scope parameter entirely to follow pytest conventions.

Suggested change
@pytest.fixture(scope="module")
@pytest.fixture

Copilot AI (author) replied:


After consideration, kept module scope in 27ad38d for efficiency since the fixture returns a static path that doesn't change between tests.
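
For context, the fixture under discussion plausibly has this shape; the exact body is not shown in the diff, so the path is an assumption:

from pathlib import Path

import pytest


@pytest.fixture(scope="module")
def fixtures_dir() -> Path:
    # Returns a static path; module scope just avoids repeating this cheap
    # lookup for every test, which is the efficiency argument made above.
    return Path(__file__).parent / "fixtures"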

Comment on lines 213 to 237
# Start server first time
result1 = self.run_lsp_command("server", "start", str(python_file))
assert result1.returncode == 0, f"Failed to start server first time: {result1.stderr}"

# Get server list
list1 = self.run_lsp_command("server", "list")
assert list1.returncode == 0

# Start server second time (should reuse)
result2 = self.run_lsp_command("server", "start", str(python_file))
assert result2.returncode == 0, f"Failed to start server second time: {result2.stderr}"

# Get server list again
list2 = self.run_lsp_command("server", "list")
assert list2.returncode == 0

# Should have the same number of Python servers
python_count1 = list1.stdout.lower().count("python")
python_count2 = list2.stdout.lower().count("python")
assert python_count1 == python_count2, "Server was not reused"

# Cleanup
self.run_lsp_command("server", "stop", str(python_file))



Copilot AI Jan 9, 2026


Test lacks proper cleanup. If an assertion fails after starting the Python server, it will remain running and could interfere with subsequent tests. Consider using a try/finally block to ensure the server is always stopped.

Suggested change
# Start server first time
result1 = self.run_lsp_command("server", "start", str(python_file))
assert result1.returncode == 0, f"Failed to start server first time: {result1.stderr}"
# Get server list
list1 = self.run_lsp_command("server", "list")
assert list1.returncode == 0
# Start server second time (should reuse)
result2 = self.run_lsp_command("server", "start", str(python_file))
assert result2.returncode == 0, f"Failed to start server second time: {result2.stderr}"
# Get server list again
list2 = self.run_lsp_command("server", "list")
assert list2.returncode == 0
# Should have the same number of Python servers
python_count1 = list1.stdout.lower().count("python")
python_count2 = list2.stdout.lower().count("python")
assert python_count1 == python_count2, "Server was not reused"
# Cleanup
self.run_lsp_command("server", "stop", str(python_file))
try:
    # Start server first time
    result1 = self.run_lsp_command("server", "start", str(python_file))
    assert result1.returncode == 0, f"Failed to start server first time: {result1.stderr}"
    # Get server list
    list1 = self.run_lsp_command("server", "list")
    assert list1.returncode == 0
    # Start server second time (should reuse)
    result2 = self.run_lsp_command("server", "start", str(python_file))
    assert result2.returncode == 0, f"Failed to start server second time: {result2.stderr}"
    # Get server list again
    list2 = self.run_lsp_command("server", "list")
    assert list2.returncode == 0
    # Should have the same number of Python servers
    python_count1 = list1.stdout.lower().count("python")
    python_count2 = list2.stdout.lower().count("python")
    assert python_count1 == python_count2, "Server was not reused"
finally:
    # Cleanup
    self.run_lsp_command("server", "stop", str(python_file))

Copilot AI (author) replied:


Added try/finally cleanup in 6c32268.

Comment on lines 113 to 126
# List servers - should show TypeScript server
result = self.run_lsp_command("server", "list")
assert result.returncode == 0, f"Failed to list servers: {result.stderr}"
# Note: TypeScript may be identified as "typescript" or abbreviated form
# We check for both to handle different language server implementations
stdout_lower = result.stdout.lower()
assert (
    "typescript" in stdout_lower or "tsserver" in stdout_lower
), f"TypeScript server not listed. Output: {result.stdout}"

# Stop server
result = self.run_lsp_command("server", "stop", str(ts_file))
assert result.returncode == 0, f"Failed to stop TypeScript server: {result.stderr}"


Copilot AI Jan 9, 2026


Test lacks proper cleanup between start and stop. If any assertion fails after starting the TypeScript server but before stopping it, the server will remain running and could interfere with subsequent tests. Consider using a try/finally block or a pytest fixture to ensure the server is always stopped.

Suggested change
# List servers - should show TypeScript server
result = self.run_lsp_command("server", "list")
assert result.returncode == 0, f"Failed to list servers: {result.stderr}"
# Note: TypeScript may be identified as "typescript" or abbreviated form
# We check for both to handle different language server implementations
stdout_lower = result.stdout.lower()
assert (
    "typescript" in stdout_lower or "tsserver" in stdout_lower
), f"TypeScript server not listed. Output: {result.stdout}"
# Stop server
result = self.run_lsp_command("server", "stop", str(ts_file))
assert result.returncode == 0, f"Failed to stop TypeScript server: {result.stderr}"
try:
    # List servers - should show TypeScript server
    result = self.run_lsp_command("server", "list")
    assert result.returncode == 0, f"Failed to list servers: {result.stderr}"
    # Note: TypeScript may be identified as "typescript" or abbreviated form
    # We check for both to handle different language server implementations
    stdout_lower = result.stdout.lower()
    assert (
        "typescript" in stdout_lower or "tsserver" in stdout_lower
    ), f"TypeScript server not listed. Output: {result.stdout}"
finally:
    # Stop server
    result = self.run_lsp_command("server", "stop", str(ts_file))
    assert result.returncode == 0, f"Failed to stop TypeScript server: {result.stderr}"

Copilot AI (author) replied:


Added try/finally cleanup in 6c32268.

Comment on lines 159 to 168
# List servers - should show Deno server
result = self.run_lsp_command("server", "list")
assert result.returncode == 0, f"Failed to list servers: {result.stderr}"
assert "deno" in result.stdout.lower(), "Deno server not listed"

# Stop server
result = self.run_lsp_command("server", "stop", str(deno_file))
assert result.returncode == 0, f"Failed to stop Deno server: {result.stderr}"



Copilot AI Jan 9, 2026


Test lacks proper cleanup between start and stop. If any assertion fails after starting the Deno server but before stopping it, the server will remain running and could interfere with subsequent tests. Consider using a try/finally block or a pytest fixture to ensure the server is always stopped.

Suggested change
# List servers - should show Deno server
result = self.run_lsp_command("server", "list")
assert result.returncode == 0, f"Failed to list servers: {result.stderr}"
assert "deno" in result.stdout.lower(), "Deno server not listed"
# Stop server
result = self.run_lsp_command("server", "stop", str(deno_file))
assert result.returncode == 0, f"Failed to stop Deno server: {result.stderr}"
try:
    # List servers - should show Deno server
    result = self.run_lsp_command("server", "list")
    assert result.returncode == 0, f"Failed to list servers: {result.stderr}"
    assert "deno" in result.stdout.lower(), "Deno server not listed"
finally:
    # Always attempt to stop the server to avoid leaking processes
    stop_result = self.run_lsp_command("server", "stop", str(deno_file))
    assert stop_result.returncode == 0, f"Failed to stop Deno server: {stop_result.stderr}"

Copilot AI (author) replied:


Added try/finally cleanup in 6c32268.

- `typescript-language-server` (TypeScript/JavaScript)
- `deno` (Deno)

However, the tests are designed to be resilient and will only fail if the language server cannot be started for a project.

Copilot AI Jan 9, 2026


Grammar issue: "However, the tests are designed to be resilient" should be followed by a more specific statement. The sentence implies resilience but then says tests "will only fail if the language server cannot be started," which is not particularly resilient behavior. Consider rephrasing to be more accurate, such as: "The tests will skip or fail gracefully if the required language server is not installed on the system."

Suggested change
However, the tests are designed to be resilient and will only fail if the language server cannot be started for a project.
However, the tests will skip or fail gracefully if the required language server is not installed or cannot be started for a project.

Copilot AI (author) replied:


Fixed grammar in 6c32268 - clarified that tests will skip or fail gracefully if language servers are not installed.
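
A small guard of this shape is one way to get that skip behavior; `require_server` is a hypothetical helper sketched here, not code from this PR:

import shutil

import pytest


def require_server(executable: str) -> None:
    # Skip the calling test when the language server binary is not on PATH.
    if shutil.which(executable) is None:
        pytest.skip(f"{executable} is not installed")

A TypeScript test could then call require_server("typescript-language-server") before starting the server.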

Comment on lines 68 to 84
def test_go_support(self, fixtures_dir):
    """Test basic LSP operations with Go project."""
    go_file = fixtures_dir / "go_project" / "main.go"
    assert go_file.exists(), "Go test file does not exist"

    # Start server
    result = self.run_lsp_command("server", "start", str(go_file))
    assert result.returncode == 0, f"Failed to start Go server: {result.stderr}"

    # List servers - should show Go server
    result = self.run_lsp_command("server", "list")
    assert result.returncode == 0, f"Failed to list servers: {result.stderr}"
    assert "go" in result.stdout.lower(), "Go server not listed"

    # Stop server
    result = self.run_lsp_command("server", "stop", str(go_file))
    assert result.returncode == 0, f"Failed to stop Go server: {result.stderr}"

Copilot AI Jan 9, 2026


Test lacks proper cleanup between start and stop. If any assertion fails after starting the Go server but before stopping it, the server will remain running and could interfere with subsequent tests. Consider using a try/finally block or a pytest fixture to ensure the server is always stopped.

Copilot AI (author) replied:


Added try/finally cleanup in 6c32268.

"""

import subprocess
import time

Copilot AI Jan 9, 2026


Import of 'time' is not used.

Suggested change
import time

Copilot AI (author) replied:


Fixed in 6c32268 - removed unused time import.

observerw (Contributor) commented:

@copilot apply changes based on the comments in this thread

…aseLSPTest to conftest

Co-authored-by: observerw <20661574+observerw@users.noreply.github.com>
observerw marked this pull request as ready for review January 9, 2026 12:37
Co-authored-by: observerw <20661574+observerw@users.noreply.github.com>
observerw added the "enhancement" (New feature or request) label Jan 9, 2026
observerw merged commit 5c97229 into main Jan 9, 2026
1 check passed