feat: Add ASCII grid notation for MIDI patterns #55

billy-and-the-oceans wants to merge 3 commits into ahujasid:main from
Conversation
This adds the ability to read existing MIDI notes from clips, which was previously missing from the API. The new function returns:
- clip_name: Name of the clip
- length: Clip length in beats
- note_count: Number of notes
- notes: Array of note objects with pitch, start_time, duration, velocity, mute

Changes:
- AbletonMCP_Remote_Script/__init__.py: Added _get_notes_from_clip method using the clip.get_notes_extended() API, plus a command handler
- MCP_Server/server.py: Added @mcp.tool() get_notes_from_clip function
- README.md: Updated capabilities and example commands

This enables AI assistants to analyze existing clips, visualize patterns, transpose notes, and build on existing musical content.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
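For illustration only, the payload described above might look like this (all values invented; only the field names come from the commit message):

```python
# Hypothetical get_notes_from_clip payload; field names match the commit
# message, values are made up for illustration
example = {
    "clip_name": "Drums",
    "length": 4.0,
    "note_count": 1,
    "notes": [
        {"pitch": 36, "start_time": 0.0, "duration": 0.25,
         "velocity": 100, "mute": False},
    ],
}

# note_count should always mirror len(notes)
assert example["note_count"] == len(example["notes"])
```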
The newer version of the mcp library changed the parameter name from 'description' to 'instructions' in FastMCP.__init__(). Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
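One defensive way to handle a rename like this is to inspect the installed class and pass whichever keyword it accepts. This is a sketch using a stand-in class (`FastMCPStub` and `make_server` are hypothetical; the real FastMCP comes from the mcp library):

```python
import inspect

class FastMCPStub:
    """Stand-in for the newer FastMCP, which accepts `instructions`."""
    def __init__(self, name, instructions=None):
        self.name = name
        self.instructions = instructions

def make_server(cls, name, text):
    # Pass whichever keyword the installed version of the class exposes
    params = inspect.signature(cls.__init__).parameters
    key = "instructions" if "instructions" in params else "description"
    return cls(name, **{key: text})

server = make_server(FastMCPStub, "AbletonMCP", "Control Ableton Live")
assert server.instructions == "Control Ableton Live"
```

Pinning the mcp dependency version is the simpler fix; a shim like this only helps if both old and new library versions must be supported.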
Adds human-readable grid notation for reading and writing MIDI clips.
Instead of working with JSON arrays, you can now use ASCII patterns:
```text
KK|o---o---|o---o-o-|
SN|----o---|----o---|
HC|x-x-x-x-|x-x-x-x-|
```
New tools:
- clip_to_grid: Read a clip and display as ASCII grid
- grid_to_clip: Write ASCII grid notation directly to a clip
- parse_grid_preview: Preview parsed notes without writing
Supports both drum grids (GM drum map) and melodic grids (pitch notation).
Auto-detects track type for appropriate formatting.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
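The actual detection logic lives in grid_notation.is_drum_track; as a plausible sketch of the idea (the pitch set and threshold here are assumptions, not the shipped code):

```python
# Assumed subset of GM percussion pitches: kick, snare, hats, crash, ride
KNOWN_DRUM_PITCHES = {36, 38, 42, 44, 46, 49, 51}

def looks_like_drums(notes, threshold=0.9):
    """Heuristic: treat a clip as drums when most pitches are known drum pitches."""
    if not notes:
        return False
    hits = sum(1 for n in notes if n["pitch"] in KNOWN_DRUM_PITCHES)
    return hits / len(notes) >= threshold

assert looks_like_drums([{"pitch": 36}, {"pitch": 38}, {"pitch": 42}])
assert not looks_like_drums([{"pitch": 60}, {"pitch": 64}, {"pitch": 67}])
```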
📝 Walkthrough

The PR adds MIDI note extraction and bidirectional ASCII grid notation support to Ableton's remote script and MCP server.
Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant MCP Server
    participant Remote Script
    participant Ableton
    User->>MCP Server: clip_to_grid(track_idx, clip_idx)
    MCP Server->>Remote Script: _get_notes_from_clip(track_idx, clip_idx)
    Remote Script->>Ableton: clip.get_notes_extended()
    Ableton-->>Remote Script: MIDI notes
    Remote Script-->>MCP Server: notes list + metadata
    MCP Server->>MCP Server: is_drum_track(notes)
    alt Drum Track Detected
        MCP Server->>MCP Server: notes_to_drum_grid(notes)
    else Melodic Track
        MCP Server->>MCP Server: notes_to_melodic_grid(notes)
    end
    MCP Server-->>User: ASCII grid string
    User->>MCP Server: grid_to_clip(track_idx, clip_idx, grid)
    MCP Server->>MCP Server: parse_grid(grid)
    MCP Server-->>MCP Server: notes list
    MCP Server->>Remote Script: add_notes_to_clip(track_idx, clip_idx, notes)
    Remote Script->>Ableton: clip.set_notes(notes)
    Ableton-->>Remote Script: success
    Remote Script-->>MCP Server: result
    MCP Server-->>User: confirmation
```
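The read path and write path above form a round trip: a grid is parsed to notes, and notes render back to the same grid. A minimal single-row sketch of that invariant (these helpers are illustrative stand-ins, not the project's parse_grid/notes_to_grid):

```python
def row_to_notes(row, pitch=36, steps_per_beat=4):
    """Parse one drum-grid row ('-' = rest, 'o' = hit) into note dicts."""
    label, cells = row.split('|', 1)
    cells = cells.replace('|', '')  # bar separators are visual only
    return [
        {"pitch": pitch, "start_time": i / steps_per_beat,
         "duration": 1 / steps_per_beat, "velocity": 100, "mute": False}
        for i, c in enumerate(cells) if c == 'o'
    ]

def notes_to_row(notes, label="KK", steps=16, steps_per_beat=4):
    """Render note dicts back into a one-row, two-bar grid string."""
    cells = ['-'] * steps
    for n in notes:
        cells[int(round(n["start_time"] * steps_per_beat))] = 'o'
    body = ''.join(cells)
    return f"{label}|{body[:8]}|{body[8:]}|"

row = "KK|o---o---|o---o-o-|"
notes = row_to_notes(row)
assert [n["start_time"] for n in notes] == [0.0, 1.0, 2.0, 3.0, 3.5]
assert notes_to_row(notes) == row  # round trip is lossless for this row
```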
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~35 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In @AbletonMCP_Remote_Script/__init__.py:
- Around line 528-571: The _get_notes_from_clip method currently calls
clip.get_notes_extended which exists only for MIDI clips and will raise an
unclear AttributeError for audio clips; add a defensive check using
hasattr(clip, "get_notes_extended") after obtaining clip (and before calling
get_notes_extended) and if it returns False log a clear message (e.g., "Clip is
not a MIDI clip") and raise an appropriate Exception (or IndexError/TypeError
consistent with other checks) so callers get a helpful error rather than an
AttributeError.
In @MCP_Server/grid_notation.py:
- Around line 470-498: parse_grid incorrectly calls parse_melodic_grid
positionally so the numeric steps_per_beat ends up assigned to base_octave;
change the melodic branch to call parse_melodic_grid using a keyword argument
for steps_per_beat (e.g. parse_melodic_grid(grid,
steps_per_beat=steps_per_beat)) or explicitly pass both base_octave and
steps_per_beat by name so steps_per_beat is not misbound; leave the
parse_drum_grid call unchanged.
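The misbinding described above is easy to reproduce with a stub that mirrors the reported signature (the stub body is illustrative; it just echoes its bindings):

```python
def parse_melodic_grid(grid, base_octave=4, steps_per_beat=4):
    # Stub mirroring the reported signature; returns its bindings for inspection
    return {"base_octave": base_octave, "steps_per_beat": steps_per_beat}

# Positional call: the intended resolution lands in base_octave
bad = parse_melodic_grid("C4|o---|", 8)
assert bad == {"base_octave": 8, "steps_per_beat": 4}

# Keyword call binds correctly
good = parse_melodic_grid("C4|o---|", steps_per_beat=8)
assert good == {"base_octave": 4, "steps_per_beat": 8}
```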
In @MCP_Server/server.py:
- Around line 727-788: The clear_existing parameter in grid_to_clip is unused
and misleading because the remote _add_notes_to_clip() calls clip.set_notes()
which replaces notes; remove the clear_existing parameter from grid_to_clip
signature and docstring, delete the unused result variable and its assignment,
and update/remove the comment that says "add_notes_to_clip is additive" to
correctly state that set_notes() replaces existing notes; ensure return/message
and behavior remain the same (use ableton.send_command("add_notes_to_clip",
{...}) with notes as before) and update any callers if they passed
clear_existing.
In @README.md:
- Around line 132-178: Add Markdown fenced-code language identifiers for the two
ASCII grid examples in README.md: change the three-backtick fences around the
Drum Grid block that begins with "KK|o---o---" and the Melodic Grid block that
begins with "G4|----o---" from ``` to ```text so both code fences are annotated
(this satisfies markdownlint MD040 and improves rendering).
🧹 Nitpick comments (3)
MCP_Server/server.py (2)
685-725: clip_to_grid looks good; consider exposing grid resolution as an option.

Right now it always uses the default steps_per_beat=4 via grid_notation.notes_to_grid(notes). If you expect triplet or higher-resolution use cases, a parameter would help, but this is optional.

790-831: parse_grid_preview is handy; make error logs include stack traces.

Switching logger.error(...) to logger.exception(...) inside the except would preserve the traceback and speed up debugging of malformed grids.

MCP_Server/grid_notation.py (1)
349-427: Melodic rendering quantization truncates; consider rounding for stability.

start_step = int(start_time * steps_per_beat) (and duration) can shift notes earlier due to float representation. Rounding tends to produce more predictable grids when notes were originally quantized.
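The truncation-vs-rounding difference is visible with a note quantized to step 3 (0.75 beats) that has drifted slightly below the exact value:

```python
# A note intended for step 3 (0.75 beats), nudged down by float error
start_time = 0.75 - 1e-10
steps_per_beat = 4

assert int(start_time * steps_per_beat) == 2    # truncation lands a step early
assert round(start_time * steps_per_beat) == 3  # rounding recovers the intended step
```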
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (4)
- AbletonMCP_Remote_Script/__init__.py
- MCP_Server/grid_notation.py
- MCP_Server/server.py
- README.md
🧰 Additional context used
🧬 Code graph analysis (1)
MCP_Server/server.py (1)
MCP_Server/grid_notation.py (3)
notes_to_grid (449-467), parse_grid (470-498), is_drum_track (433-446)
🪛 markdownlint-cli2 (0.18.1)
README.md
142-142: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
164-164: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🪛 Ruff (0.14.10)
AbletonMCP_Remote_Script/__init__.py
497-497: Abstract raise to an inner function
(TRY301)
497-497: Avoid specifying long messages outside the exception class
(TRY003)
502-502: Abstract raise to an inner function
(TRY301)
502-502: Create your own exception
(TRY002)
502-502: Avoid specifying long messages outside the exception class
(TRY003)
523-523: Consider moving this statement to an else block
(TRY300)
532-532: Abstract raise to an inner function
(TRY301)
532-532: Avoid specifying long messages outside the exception class
(TRY003)
537-537: Abstract raise to an inner function
(TRY301)
537-537: Avoid specifying long messages outside the exception class
(TRY003)
542-542: Abstract raise to an inner function
(TRY301)
542-542: Create your own exception
(TRY002)
542-542: Avoid specifying long messages outside the exception class
(TRY003)
568-568: Consider moving this statement to an else block
(TRY300)
MCP_Server/grid_notation.py
273-273: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
339-339: Loop control variable bar not used within loop body
Rename unused bar to _bar
(B007)
352-352: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
419-419: Loop control variable bar not used within loop body
Rename unused bar to _bar
(B007)
449-449: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
470-470: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
MCP_Server/server.py
349-349: Unused function argument: ctx
(ARG001)
375-375: Unused function argument: ctx
(ARG001)
393-393: Do not catch blind exception: Exception
(BLE001)
394-394: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
394-394: Use explicit conversion flag
Replace with conversion flag
(RUF010)
395-395: Use explicit conversion flag
Replace with conversion flag
(RUF010)
686-686: Unused function argument: ctx
(ARG001)
722-722: Do not catch blind exception: Exception
(BLE001)
723-723: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
723-723: Use explicit conversion flag
Replace with conversion flag
(RUF010)
724-724: Use explicit conversion flag
Replace with conversion flag
(RUF010)
729-729: Unused function argument: ctx
(ARG001)
733-733: Unused function argument: clear_existing
(ARG001)
774-774: Local variable result is assigned to but never used
Remove assignment to unused variable result
(F841)
785-785: Do not catch blind exception: Exception
(BLE001)
786-786: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
786-786: Use explicit conversion flag
Replace with conversion flag
(RUF010)
787-787: Use explicit conversion flag
Replace with conversion flag
(RUF010)
791-791: Unused function argument: ctx
(ARG001)
827-827: Consider moving this statement to an else block
(TRY300)
828-828: Do not catch blind exception: Exception
(BLE001)
829-829: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
829-829: Use explicit conversion flag
Replace with conversion flag
(RUF010)
830-830: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🔇 Additional comments (3)
AbletonMCP_Remote_Script/__init__.py (1)
228-231: Route for get_notes_from_clip looks correct, but verify Live API thread-safety.

This new command runs on the client handler thread (unlike state-modifying commands). If clip.get_notes_extended(...) requires the main thread in your Ableton version, you'll see intermittent crashes or "Live API called from wrong thread"-style issues. Consider scheduling this read as well if you observe instability.

MCP_Server/server.py (1)
374-396: get_notes_from_clip tool wiring is straightforward.

Nice addition for enabling higher-level transforms (grid view, transposition, etc.) without writing to clips.
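A minimal sketch of such a read-only transform over the note dicts this tool returns (the helper name and clamping behavior are illustrative, not part of the PR):

```python
def transpose(notes, semitones):
    """Return a transposed copy, clamping to the valid MIDI pitch range 0-127."""
    return [
        {**n, "pitch": max(0, min(127, n["pitch"] + semitones))}
        for n in notes
    ]

notes = [{"pitch": 60, "start_time": 0.0, "duration": 0.25,
          "velocity": 100, "mute": False}]
up = transpose(notes, 12)
assert up[0]["pitch"] == 72
assert notes[0]["pitch"] == 60  # original list is untouched
```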
MCP_Server/grid_notation.py (1)
23-103: Mappings/constants are clear and easy to extend.

The label aliases + preferred output labels are a pragmatic approach for "human first" notation.
```python
def _get_notes_from_clip(self, track_index, clip_index):
    """Get all MIDI notes from a clip"""
    try:
        if track_index < 0 or track_index >= len(self._song.tracks):
            raise IndexError("Track index out of range")

        track = self._song.tracks[track_index]

        if clip_index < 0 or clip_index >= len(track.clip_slots):
            raise IndexError("Clip index out of range")

        clip_slot = track.clip_slots[clip_index]

        if not clip_slot.has_clip:
            raise Exception("No clip in slot")

        clip = clip_slot.clip

        # Get all notes from the clip
        # get_notes_extended(from_pitch, pitch_span, from_time, time_span)
        # Get all pitches (0-128) for the full clip length
        notes_tuple = clip.get_notes_extended(0, 128, 0.0, clip.length)

        # Convert to list of dictionaries
        notes = []
        for note in notes_tuple:
            notes.append({
                "pitch": note.pitch,
                "start_time": note.start_time,
                "duration": note.duration,
                "velocity": note.velocity,
                "mute": note.mute
            })

        result = {
            "clip_name": clip.name,
            "length": clip.length,
            "note_count": len(notes),
            "notes": notes
        }
        return result
    except Exception as e:
        self.log_message("Error getting notes from clip: " + str(e))
        raise
```
🧩 Analysis chain

🏁 Scripts executed (Repository: ahujasid/ableton-mcp):

```shell
# Understand the repository structure
find . -type f -name "*.py" | head -20

# Check for documentation or other references to clip types
rg "MIDI|audio|clip" --type py -B 2 -A 2 | head -100

# Search for other uses of get_notes_extended or similar API calls
rg "get_notes" --type py

# Look for hasattr or capability checks in the codebase
rg "hasattr" --type py

# Examine the context around _get_notes_from_clip and how clips are created
wc -l AbletonMCP_Remote_Script/__init__.py
sed -n '520,580p' AbletonMCP_Remote_Script/__init__.py

# Check create_clip / create_audio_track / create_midi_track implementations
rg "def create" AbletonMCP_Remote_Script/__init__.py -A 10 | head -80

# Search for clip type checking patterns
rg "is_midi|is_audio|clip_type" AbletonMCP_Remote_Script/__init__.py

# Check how tracks are differentiated (MIDI vs Audio)
rg "MIDI|Audio" AbletonMCP_Remote_Script/__init__.py | grep -i "track\|clip" | head -20

# Look at _create_clip for a clip type distinction
rg "def _create_clip" AbletonMCP_Remote_Script/__init__.py -A 30

# Check for any audio clip handling
rg "audio.*clip|clip.*audio" AbletonMCP_Remote_Script/__init__.py -i

# Check whether _add_notes_to_clip does any type checking
sed -n '480,530p' AbletonMCP_Remote_Script/__init__.py

# Search for clip classes, type properties, or MIDI/audio attributes
rg "class.*Clip|clip_type|MidiClip|AudioClip" AbletonMCP_Remote_Script/__init__.py
rg "track\.has|track\.is|clip\." AbletonMCP_Remote_Script/__init__.py | grep -E "audio|midi" | head -20
rg "get_notes|set_notes" AbletonMCP_Remote_Script/__init__.py -B 3 -A 3

# Look at the file header for Live API version notes
head -50 AbletonMCP_Remote_Script/__init__.py

# Check whether clips can only be created on specific track types
rg "_create_clip|track_type|has_audio_input|has_midi_input" AbletonMCP_Remote_Script/__init__.py -B 2 -A 2 | head -50

# Check server.py for validation before calling _get_notes_from_clip
rg "get_notes_from_clip" MCP_Server/server.py -B 5 -A 5

# Check how clips are loaded/imported and what clip slots can contain
rg "load|import|audio" AbletonMCP_Remote_Script/__init__.py | grep -i "clip" | head -10
rg "clip_slot|has_clip" AbletonMCP_Remote_Script/__init__.py -B 2 -A 2 | head -40

# (web_search: Ableton Live API get_notes_extended method MIDI clip audio clip)

# Confirm there is no repo-wide clip type property
rg "is_audio|is_midi|clip_type" --type py
```

🌐 Web query: Ableton Live API get_notes_extended MIDI clip audio clip documentation

💡 Result: get_notes_extended is part of Live's Clip API (Live Object Model). Key points:

- It's for MIDI clips only (audio clips have no MIDI notes).
- Signature: get_notes_extended(from_pitch, pitch_span, from_time, time_span); you may also pass a single dict with those keys. Returns a list/dict of note dictionaries.
- Each returned note dictionary can include: note_id, pitch, start_time (beats, clip absolute time), duration (beats), velocity, mute, probability, velocity_deviation, release_velocity (and you can request a subset of fields to improve performance). Available since Live 11.
- Related newer methods (Live 11+): get_selected_notes_extended, get_all_notes_extended, add_new_notes, apply_note_modifications, remove_notes_extended, get_notes_by_id, remove_notes_by_id. These replace the older get_notes / set_notes APIs.

Citations:
- https://docs.cycling74.com/apiref/lom/clip/?utm_source=openai
- https://cycling74.com/articles/what%27s-new-in-live-11-part-2?utm_source=openai

🌐 Web query: Ableton Live Python API clip types has_audio_input has_midi_input

💡 Result: has_audio_input and has_midi_input are boolean Track properties in Live's API (Live Object Model) that indicate whether a track accepts audio input (true for audio tracks) or MIDI input (true for MIDI tracks). Clips themselves are either audio or MIDI, and ClipSlot.create_clip only creates MIDI clips via the API; you cannot programmatically create audio clips through the standard Live Object Model.

Citations:
- https://docs.cycling74.com/apiref/lom/track/?utm_source=openai
- https://stackoverflow.com/questions/14004580/create-clip-using-ableton-live-python-api?utm_source=openai
Add clip type check to prevent confusing errors on audio clips.
clip.get_notes_extended() is a MIDI-only method in the Ableton Live API. If a user loads an audio clip into a slot, this will fail with an unhelpful AttributeError. A simple hasattr(clip, "get_notes_extended") check provides a clearer error path and aligns with the defensive checks used elsewhere in this file.
Proposed fix

```diff
 def _get_notes_from_clip(self, track_index, clip_index):
     """Get all MIDI notes from a clip"""
     try:
@@
         clip = clip_slot.clip
+
+        # Fail fast if this isn't a MIDI clip
+        if not hasattr(clip, "get_notes_extended"):
+            raise TypeError("Target clip does not support MIDI note extraction (expected a MIDI clip)")
@@
         notes_tuple = clip.get_notes_extended(0, 128, 0.0, clip.length)
```
🤖 Prompt for AI Agents
In @AbletonMCP_Remote_Script/__init__.py around lines 528 - 571, The
_get_notes_from_clip method currently calls clip.get_notes_extended which exists
only for MIDI clips and will raise an unclear AttributeError for audio clips;
add a defensive check using hasattr(clip, "get_notes_extended") after obtaining
clip (and before calling get_notes_extended) and if it returns False log a clear
message (e.g., "Clip is not a MIDI clip") and raise an appropriate Exception (or
IndexError/TypeError consistent with other checks) so callers get a helpful
error rather than an AttributeError.
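The fail-fast pattern the prompt describes can be sketched with stand-in clip classes (both stub classes and `read_notes` are hypothetical; only the hasattr check mirrors the suggested fix):

```python
class AudioClipStub:
    """Stand-in for an audio clip: exposes no MIDI note API."""
    name = "Audio"

class MidiClipStub:
    """Stand-in for a MIDI clip."""
    name = "MIDI"
    length = 4.0
    def get_notes_extended(self, from_pitch, pitch_span, from_time, time_span):
        return []

def read_notes(clip):
    # Fail fast with a clear error instead of an AttributeError deep inside
    if not hasattr(clip, "get_notes_extended"):
        raise TypeError("Clip is not a MIDI clip")
    return clip.get_notes_extended(0, 128, 0.0, clip.length)

assert read_notes(MidiClipStub()) == []
try:
    read_notes(AudioClipStub())
except TypeError as e:
    assert "MIDI" in str(e)
```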
```python
def notes_to_drum_grid(
    notes: list[dict],
    steps_per_beat: int = 4,
    num_bars: int = None
) -> str:
    """
    Convert MIDI notes to ASCII drum grid.

    Args:
        notes: List of note dicts from Ableton
        steps_per_beat: Grid resolution
        num_bars: Number of bars (auto-detect if None)

    Returns:
        ASCII grid string
    """
    if not notes:
        return "(empty)"

    # Find clip length
    max_time = max(n.get('start_time', 0) + n.get('duration', 0.25) for n in notes)
    if num_bars is None:
        num_bars = max(1, int((max_time + 3.9) // 4))

    total_steps = num_bars * 4 * steps_per_beat

    # Group notes by pitch
    pitch_notes = {}
    for note in notes:
        pitch = note.get('pitch', 36)
        if pitch not in pitch_notes:
            pitch_notes[pitch] = []
        pitch_notes[pitch].append(note)

    # Standard drum order for display
    display_order = ['HC', 'HO', 'RD', 'CR', 'SN', 'CL', 'RM', 'KK', 'HT', 'MT', 'LT', 'FT']
    pitch_order = [DRUM_LABELS.get(label, 0) for label in display_order]

    # Build grid lines
    lines = []

    for pitch in sorted(pitch_notes.keys(), key=lambda p: pitch_order.index(p) if p in pitch_order else 99):
        label = PREFERRED_LABELS.get(pitch, f'{pitch:02d}')

        # Initialize row
        row = ['-'] * total_steps

        for note in pitch_notes[pitch]:
            step = int(note.get('start_time', 0) * steps_per_beat)
            if step < total_steps:
                vel = note.get('velocity', 100)
                if vel > 110:
                    row[step] = 'O'
                elif vel > 70:
                    row[step] = 'o'
                else:
                    row[step] = '.'

        # Format with bar separators
        formatted = f"{label}|"
        for i, char in enumerate(row):
            formatted += char
            if (i + 1) % (4 * steps_per_beat) == 0:
                formatted += '|'

        lines.append(formatted)

    # Add beat markers
    beat_line = "  |"
    for bar in range(num_bars):
        for beat in range(1, 5):
            beat_line += str(beat)
            beat_line += ' ' * (steps_per_beat - 1)
        beat_line += '|'
    lines.append(beat_line)

    return '\n'.join(lines)
```
Drum rendering doesn’t emit x/X for hi-hats (docs + examples show hats as x).
Right now notes_to_drum_grid renders all drum pitches with O/o/. based on velocity only. That makes hat rows look like any other drum row and won’t match your README examples. Consider pitch-aware symbols for hat pitches (42/44/46).
Illustrative adjustment (pitch-aware hats + rounding)

```diff
-            step = int(note.get('start_time', 0) * steps_per_beat)
+            step = int(round(note.get('start_time', 0) * steps_per_beat))
             if step < total_steps:
                 vel = note.get('velocity', 100)
-                if vel > 110:
-                    row[step] = 'O'
-                elif vel > 70:
-                    row[step] = 'o'
-                else:
-                    row[step] = '.'
+                if pitch in (42, 44):  # closed/pedal hat
+                    row[step] = 'X' if vel > 110 else ('x' if vel > 70 else '.')
+                elif pitch == 46:  # open hat
+                    row[step] = 'X' if vel > 70 else '.'
+                else:
+                    row[step] = 'O' if vel > 110 else ('o' if vel > 70 else '.')
```
🪛 Ruff (0.14.10)
273-273: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
339-339: Loop control variable bar not used within loop body
Rename unused bar to _bar
(B007)
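Extracted as a standalone helper, the pitch-aware mapping the reviewer suggests behaves like this (a sketch of the suggestion, not shipped code; the function name is made up):

```python
def symbol(pitch, vel):
    """Map a drum hit to a grid character, with hat pitches using x/X."""
    if pitch in (42, 44):  # closed/pedal hi-hat
        return 'X' if vel > 110 else ('x' if vel > 70 else '.')
    if pitch == 46:        # open hi-hat
        return 'X' if vel > 70 else '.'
    return 'O' if vel > 110 else ('o' if vel > 70 else '.')

assert symbol(42, 100) == 'x'   # closed hat, normal velocity
assert symbol(42, 120) == 'X'   # closed hat, accent
assert symbol(46, 80) == 'X'    # open hat
assert symbol(36, 120) == 'O'   # kick accent keeps the O/o/. scheme
```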
```python
def parse_grid(grid: str, is_drums: bool = None, steps_per_beat: int = 4) -> list[dict]:
    """
    Parse grid notation to notes.

    Args:
        grid: ASCII grid string
        is_drums: Force drum mode (auto-detect if None)
        steps_per_beat: Grid resolution

    Returns:
        List of note dicts
    """
    # Auto-detect based on labels
    if is_drums is None:
        lines = grid.strip().split('\n')
        for line in lines:
            match = re.match(r'^([A-Z]+)\s*\|', line, re.IGNORECASE)
            if match:
                label = match.group(1).upper()
                if label in DRUM_LABELS:
                    is_drums = True
                    break
        if is_drums is None:
            is_drums = False

    if is_drums:
        return parse_drum_grid(grid, steps_per_beat)
    else:
        return parse_melodic_grid(grid, steps_per_beat)
```
Critical: parse_grid passes steps_per_beat into base_octave for melodic grids.
parse_melodic_grid(grid: str, base_octave: int = 4, steps_per_beat: int = 4) is called positionally as parse_melodic_grid(grid, steps_per_beat), so a steps_per_beat=4 request accidentally becomes base_octave=4 and steps_per_beat stays default. If steps_per_beat is ever not 4, parsing will be wrong.
Proposed fix

```diff
 def parse_grid(grid: str, is_drums: bool = None, steps_per_beat: int = 4) -> list[dict]:
 @@
     if is_drums:
         return parse_drum_grid(grid, steps_per_beat)
     else:
-        return parse_melodic_grid(grid, steps_per_beat)
+        return parse_melodic_grid(grid, steps_per_beat=steps_per_beat)
```
🪛 Ruff (0.14.10)
470-470: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
🤖 Prompt for AI Agents
In @MCP_Server/grid_notation.py around lines 470 - 498, parse_grid incorrectly
calls parse_melodic_grid positionally so the numeric steps_per_beat ends up
assigned to base_octave; change the melodic branch to call parse_melodic_grid
using a keyword argument for steps_per_beat (e.g. parse_melodic_grid(grid,
steps_per_beat=steps_per_beat)) or explicitly pass both base_octave and
steps_per_beat by name so steps_per_beat is not misbound; leave the
parse_drum_grid call unchanged.
```python
@mcp.tool()
def grid_to_clip(
    ctx: Context,
    track_index: int,
    clip_index: int,
    grid: str,
    clear_existing: bool = True
) -> str:
    """
    Write ASCII grid notation directly to a clip.

    Parameters:
    - track_index: The index of the track containing the clip
    - clip_index: The index of the clip slot
    - grid: ASCII grid notation (drums or melodic)
    - clear_existing: Whether to clear existing notes first (default: True)

    Drum grid format:
    KK|o---o---|o---o-o-|
    SN|----o---|----o---|
    HC|x-x-x-x-|x-x-x-x-|

    Symbols:
    o = normal hit, O = accent, . = ghost note
    x = closed hi-hat, X = open hi-hat
    - = rest

    Melodic grid format:
    G4|----o---|--------|
    E4|--o-----|oooo----|
    C4|o-------|----oooo|

    Each character = 1/16th note. Bar separators (|) are visual only.
    """
    try:
        ableton = get_ableton_connection()

        # Parse the grid notation
        notes = grid_notation.parse_grid(grid)

        if not notes:
            return "No notes parsed from grid. Check your notation format."

        # If clearing existing, we need to get current notes and remove them
        # For now, just add the notes (Ableton's add_notes_to_clip is additive)

        # Add notes to clip
        result = ableton.send_command("add_notes_to_clip", {
            "track_index": track_index,
            "clip_index": clip_index,
            "notes": notes
        })

        # Auto-detect what we wrote
        is_drums = grid_notation.is_drum_track(notes)
        type_str = "drum" if is_drums else "melodic"

        return f"Added {len(notes)} {type_str} notes to clip at track {track_index}, slot {clip_index}"
    except Exception as e:
        logger.error(f"Error writing grid to clip: {str(e)}")
        return f"Error writing grid to clip: {str(e)}"
```
🧩 Analysis chain

🏁 Scripts executed (Repository: ahujasid/ableton-mcp):

```shell
# Confirm the exact content at lines 727-788 in MCP_Server/server.py
sed -n '727,788p' MCP_Server/server.py

# Check for any other usages of the clear_existing parameter
rg "clear_existing" MCP_Server/server.py -A 5 -B 5

# Find files that implement or reference add_notes_to_clip
find . -name "*.py" | xargs grep -l "add_notes_to_clip" 2>/dev/null

# Look for callers of grid_to_clip
rg "grid_to_clip" -B 2 -A 2

# Look at the send_command implementation and connection handler
rg "def send_command" -A 15
rg "class.*[Aa]bleton|def get_ableton_connection" -A 10

# Check the remote script and LiveAPI handler files
find . -name "*.py" -o -name "*.js" | head -20 | xargs ls -la 2>/dev/null | head -30

# Look at the add_notes_to_clip implementation in the remote script
rg "add_notes_to_clip" AbletonMCP_Remote_Script/__init__.py -B 5 -A 20

# Check whether set_notes clears or appends
rg "set_notes|clear.*notes|delete.*notes" AbletonMCP_Remote_Script/__init__.py -B 3 -A 5
rg "set_notes|clip\.set_notes" AbletonMCP_Remote_Script/__init__.py -B 3 -A 10

# Look at the full command handler structure
rg "def handle_command|elif.*add_notes" AbletonMCP_Remote_Script/__init__.py -B 2 -A 15

# Look for comments or TODOs around clear_existing or note handling
rg "clear|TODO|FIXME" MCP_Server/server.py | grep -i "note\|clear"
```
Remove unused clear_existing parameter and fix misleading semantics.
The clear_existing parameter is never used. More critically, the docstring and comment are misleading: the remote script's _add_notes_to_clip() calls clip.set_notes(), which is a replacement operation that always clears existing notes. The current behavior already implements what clear_existing=True promises, making the parameter and its default meaningless. Either remove the parameter entirely (since clearing is unconditional) or rename it to clarify intent if you want to support non-clearing behavior later.
Also remove the unused result variable and fix the comment claiming "add_notes_to_clip is additive"—it replaces notes per Live's set_notes() semantics.
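A minimal sketch of the tool after the suggested cleanup (the `parse_grid` helper and the `StubConnection` class here are illustrative stand-ins, not the real helpers in server.py):

```python
def parse_grid(grid: str) -> list[dict]:
    """Stand-in parser: one note dict per 'o' hit, 1/16th-note (0.25 beat) steps."""
    return [
        {"pitch": 36, "start_time": i * 0.25, "duration": 0.25, "velocity": 100}
        for i, ch in enumerate(grid.replace("|", ""))
        if ch == "o"
    ]

class StubConnection:
    """Stub for the Ableton socket connection; records commands instead of sending."""
    def __init__(self):
        self.sent = []

    def send_command(self, command: str, params: dict):
        self.sent.append((command, params))

def grid_to_clip(ableton, track_index: int, clip_index: int, grid: str) -> str:
    """Write grid notation to a clip.

    No clear_existing flag: the remote script's handler calls clip.set_notes(),
    which replaces the clip's existing notes unconditionally.
    """
    notes = parse_grid(grid)
    ableton.send_command("add_notes_to_clip", {
        "track_index": track_index,
        "clip_index": clip_index,
        "notes": notes,
    })
    return f"Wrote {len(notes)} notes to clip {clip_index} on track {track_index}"
```

For example, `grid_to_clip(StubConnection(), 1, 0, "o---o---")` returns "Wrote 2 notes to clip 0 on track 1".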
🧰 Tools
🪛 Ruff (0.14.10)
729-729: Unused function argument: ctx
(ARG001)
733-733: Unused function argument: clear_existing
(ARG001)
774-774: Local variable result is assigned to but never used
Remove assignment to unused variable result
(F841)
785-785: Do not catch blind exception: Exception
(BLE001)
786-786: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
786-786: Use explicit conversion flag
Replace with conversion flag
(RUF010)
787-787: Use explicit conversion flag
Replace with conversion flag
(RUF010)
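The TRY400 and RUF010 findings amount to a small mechanical change; a sketch of the pattern Ruff asks for (the function names here are illustrative, not taken from server.py):

```python
import logging

logger = logging.getLogger(__name__)

def count_rows(grid: str) -> int:
    """Illustrative stand-in for a grid-parsing call that may raise on bad input."""
    if not grid:
        raise ValueError("empty grid")
    return len(grid.splitlines())

def handle(grid: str) -> str:
    try:
        rows = count_rows(grid)
    except ValueError as e:
        # TRY400: logging.exception records the traceback automatically,
        # where logging.error would silently drop it.
        logger.exception("Failed to parse grid")
        # RUF010: the explicit conversion flag !s replaces f"{str(e)}".
        return f"Error: {e!s}"
    return f"Parsed {rows} rows"
```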
🤖 Prompt for AI Agents
In @MCP_Server/server.py around lines 727 - 788, The clear_existing parameter in
grid_to_clip is unused and misleading because the remote _add_notes_to_clip()
calls clip.set_notes() which replaces notes; remove the clear_existing parameter
from grid_to_clip signature and docstring, delete the unused result variable and
its assignment, and update/remove the comment that says "add_notes_to_clip is
additive" to correctly state that set_notes() replaces existing notes; ensure
return/message and behavior remain the same (use
ableton.send_command("add_notes_to_clip", {...}) with notes as before) and
update any callers if they passed clear_existing.
- **Read notes from existing MIDI clips**
- **ASCII Grid Notation** (NEW) - Read and write MIDI using human-readable grid format
- Change tempo and other session parameters
## Grid Notation

Grid notation lets you read and write MIDI patterns using ASCII art instead of JSON. This is easier to read, visualize, and iterate on.
### Drum Grid Format

```
KK|o---o---|o---o-o-|
SN|----o---|----o---|
HC|x-x-x-x-|x-x-x-x-|
  |1 2 3 4 |1 2 3 4 |
```
**Symbols:**
- `o` = normal hit (velocity 100)
- `O` = accent (velocity 127)
- `.` = ghost note (velocity 50)
- `x` = closed hi-hat
- `X` = open hi-hat / accent
- `-` = rest

**Drum Labels:**
- `KK` = Kick, `SN` = Snare, `HC` = Closed Hi-hat, `HO` = Open Hi-hat
- `CR` = Crash, `RD` = Ride, `LT/MT/HT` = Toms
- See `grid_notation.py` for full list
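The symbol table lends itself to a tiny parser; a sketch under the stated semantics (one character per 1/16th note, i.e. 0.25 beats per step; the velocities for `x`/`X` are assumptions, since the legend defines them by articulation rather than velocity, and the real implementation lives in `grid_notation.py`):

```python
# Velocities per the legend above; the 'x'/'X' values are assumed.
SYMBOL_VELOCITY = {"o": 100, "O": 127, ".": 50, "x": 100, "X": 127}

def parse_row(row: str, step: float = 0.25) -> list[tuple[float, int]]:
    """Turn one grid row body into (start_beat, velocity) pairs.

    '-' is a rest and advances time; '|' is a visual bar separator and does not.
    """
    notes = []
    pos = 0
    for ch in row:
        if ch == "|":
            continue
        if ch in SYMBOL_VELOCITY:
            notes.append((pos * step, SYMBOL_VELOCITY[ch]))
        pos += 1
    return notes
```

Fed the kick row from the example, `parse_row("o---o---|o---o-o-")` places hits on beats 0, 1, 2, 3, and 3.5.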
### Melodic Grid Format

```
G4|----o---|--------|
E4|--o-----|oooo----|
C4|o-------|----oooo|
  |1 2 3 4 |1 2 3 4 |
```

Each character = 1/16th note. Bar separators (`|`) are visual only.
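Row labels in a melodic grid name pitches; a sketch of label-to-MIDI conversion, assuming the common convention where C4 = 60 (Ableton's UI calls middle C "C3", so `grid_notation.py` may anchor the octave differently):

```python
# Semitone offset of each letter name within an octave.
NOTE_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def pitch_to_midi(label: str) -> int:
    """Convert a row label like 'C4' or 'F#3' to a MIDI note number (C4 = 60)."""
    letter, rest = label[0].upper(), label[1:]
    accidental = 0
    if rest and rest[0] in "#b":
        accidental = 1 if rest[0] == "#" else -1
        rest = rest[1:]
    return (int(rest) + 1) * 12 + NOTE_OFFSETS[letter] + accidental
```

Under this convention the example rows C4, E4, G4 come out as 60, 64, 67, a C major triad.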
### Grid Notation Tools

- **`clip_to_grid`** - Read a clip and display as ASCII grid
- **`grid_to_clip`** - Write ASCII grid notation directly to a clip
- **`parse_grid_preview`** - Preview what notes a grid would produce (without writing)
Add language identifiers to fenced blocks (markdownlint MD040).
This keeps README lint-clean and improves rendering in some viewers.
Proposed fix
-```
+```text
KK|o---o---|o---o-o-|
SN|----o---|----o---|
HC|x-x-x-x-|x-x-x-x-|
|1 2 3 4 |1 2 3 4 |
-```
+```
@@
-```
+```text
G4|----o---|--------|
E4|--o-----|oooo----|
C4|o-------|----oooo|
|1 2 3 4 |1 2 3 4 |
-```
+```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
142-142: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
164-164: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In @README.md around lines 132 - 178, Add Markdown fenced-code language
identifiers for the two ASCII grid examples in README.md: change the
three-backtick fences around the Drum Grid block that begins with "KK|o---o---"
and the Melodic Grid block that begins with "G4|----o---" from ``` to ```text so
both code fences are annotated (this satisfies markdownlint MD040 and improves
rendering).
User description
Summary
Example
Instead of working with verbose JSON like:
[{"pitch": 36, "start_time": 0, "duration": 0.25, "velocity": 100}, ...]You can now read and write patterns like:
New Tools
- `clip_to_grid` - Read a clip and display as ASCII grid
- `grid_to_clip` - Write ASCII grid notation directly to a clip
- `parse_grid_preview` - Preview what notes a grid would produce (without writing to Ableton)

Files Changed
- `MCP_Server/grid_notation.py` - New module with notation parsing and generation
- `MCP_Server/server.py` - Added three new MCP tools
- `README.md` - Documentation for grid notation format

Test plan
- `clip_to_grid` on existing drum clips
- `clip_to_grid` on existing melodic clips
- `grid_to_clip` writing drum patterns
- `grid_to_clip` writing melodic patterns
- `parse_grid_preview` with various inputs

🤖 Generated with Claude Code
PR Type
Enhancement, Tests
Description
- Add `get_notes_from_clip` function to read MIDI notes from existing clips
- Implement ASCII grid notation module for human-readable MIDI pattern editing
- Add three new MCP tools: `clip_to_grid`, `grid_to_clip`, `parse_grid_preview`
- Support both drum patterns (GM drum map) and melodic notation with auto-detection
- Update FastMCP initialization parameter from `description` to `instructions`

Diagram Walkthrough
File Walkthrough
__init__.py

Add MIDI note reading from clips

AbletonMCP_Remote_Script/__init__.py

- `_get_notes_from_clip` method to read MIDI notes from clips using `get_notes_extended()` API
Implement ASCII grid notation system

MCP_Server/grid_notation.py
velocity symbols (o, O, ., x, X)
tracking
server.py
Add grid notation MCP tools and update FastMCP config

MCP_Server/server.py

- `grid_notation` module for grid parsing and generation
- `description` to `instructions`
- `get_notes_from_clip` MCP tool to retrieve notes from clips
- `clip_to_grid` MCP tool to display clips as ASCII grid notation
- `grid_to_clip` MCP tool to write grid notation to clips
- `parse_grid_preview` MCP tool to preview parsed notes without writing
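The note objects that `get_notes_from_clip` returns carry pitch, start time, duration, velocity, and mute state; a sketch of how a remote-script handler could serialize them, with a stub class standing in for the objects Live's `get_notes_extended()` returns (field names follow the JSON example earlier in this PR, and the real handler in `__init__.py` may differ):

```python
class StubNote:
    """Stand-in for a note object as returned by Live's get_notes_extended()."""
    def __init__(self, pitch, start_time, duration, velocity, mute=False):
        self.pitch = pitch
        self.start_time = start_time
        self.duration = duration
        self.velocity = velocity
        self.mute = mute

def serialize_notes(notes) -> list[dict]:
    """Flatten note objects into the JSON-friendly shape the MCP tool returns."""
    return [
        {"pitch": n.pitch, "start_time": n.start_time, "duration": n.duration,
         "velocity": n.velocity, "mute": n.mute}
        for n in notes
    ]
```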
README.md
Document grid notation and note reading features

README.md
examples
Summary by CodeRabbit
New Features
Documentation