Release bump to 2.0.1 with bug fixes, internal refactors, docs/examples updates, and tests.

Changes:
- Bumped package version to 2.0.1 and added a changelog entry.
- Fixed truncate behavior: the suffix is no longer appended when the text already fits within max_len.
- Made starts_with_any / ends_with_any grapheme-aware (they now use the public starts_with / ends_with).
- Corrected the center docs to indicate right-biased padding when uneven.
- Simplified the find_emoji_index API (removed the unused start parameter) and tightened its loop.
- Replaced the repeat_str implementation with a direct recursive loop to avoid an intermediate list allocation.
- Updated examples and docs to use the public `str` API (and `str/advanced` where applicable) and to reflect internal module path changes (src/str/internal/*).
- Updated tokenizer docs to reference the public `str.chars` / `str.chars_stdlib` and marked the internal tokenizer module.
- Added regression tests for the truncate no-op, grapheme-aware starts/ends_with_any, and center right-bias.

Other: various documentation cleanups and example call-site adjustments to match the 2.0 public API.
Pull request overview
Release bump to 2.0.1 for the str Gleam library, focusing on correctness fixes for grapheme-aware behavior, small internal API cleanups, and aligning docs/examples with the 2.0 public API surface.
Changes:
- Fixed `truncate_with_flag` so text that already fits within `max_len` is returned unchanged (the suffix is only applied when truncating).
- Made `starts_with_any` / `ends_with_any` grapheme-aware by delegating to the library's grapheme-aware `starts_with` / `ends_with`.
- Updated docs/examples to reference the `import str` public API (and clarified internal module paths), plus added regression tests.
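The truncate fix amounts to an early return before any cutting happens. A minimal sketch of that behavior, assuming `gleam/string` from the stdlib — the `suffix` parameter and the body are illustrative, not the library's actual code; `string.length` and `string.slice` in Gleam count graphemes, which keeps the sketch grapheme-aware:

```gleam
import gleam/string

// Sketch of the fixed behavior: text that already fits is returned
// unchanged; the suffix is appended only when we actually cut.
fn truncate_with_flag(text: String, max_len: Int, suffix: String) -> String {
  case string.length(text) <= max_len {
    True -> text
    False -> string.slice(text, 0, max_len) <> suffix
  }
}
```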
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 4 comments.
Summary per file:
| File | Description |
|---|---|
| test/str_core_test.gleam | Adds regression tests for truncate noop, grapheme-aware starts/ends_with_any, and center right-bias. |
| src/str/internal/core.gleam | Fixes truncate condition, makes starts/ends_with_any grapheme-aware, simplifies find_emoji_index, updates center doc, tweaks repeat_str. |
| gleam.toml | Bumps package version to 2.0.1. |
| docs/str_tokenize.md | Updates tokenizer docs to the public str.chars / str.chars_stdlib API and marks internal module path. |
| docs/str_extra.md | Updates module naming/notes and examples to use public str API; updates internal file references. |
| docs/str_core.md | Updates module naming/notes and corrects center right-bias documentation. |
| EXAMPLES.md | Updates examples to the 2.0 public API (str, plus str/advanced where applicable) and internal path references. |
| CHANGELOG.md | Adds 2.0.1 release notes summarizing fixes, changes, tests, and documentation updates. |
```diff
 fn repeat_str(s: String, n: Int) -> String {
-  case n <= 0 {
-    True -> ""
-    False -> list.fold(list.range(1, n), "", fn(acc, _) { acc <> s })
-  }
+  repeat_str_loop(s, n, "")
+}
+
+fn repeat_str_loop(s: String, n: Int, acc: String) -> String {
+  case n <= 0 {
+    True -> acc
+    False -> repeat_str_loop(s, n - 1, acc <> s)
+  }
 }
```
`repeat_str_loop` concatenates via `acc <> s` on every iteration, which repeatedly copies the growing string (O(n²) in the number of repeats). The previous `list.fold(..., fn(acc, _) { acc <> s })` had the same cost, but if padding/indent can be large it may be worth switching to a linear-time approach that concatenates once (e.g., build the chunks, then `string.concat`).
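A minimal sketch of the linear-time alternative, assuming `gleam/list` and `gleam/string` from the stdlib (the function name is illustrative):

```gleam
import gleam/list
import gleam/string

// Build the n copies as a list once, then join them in a single pass
// with string.concat, avoiding the repeated copying of `acc <> s`.
fn repeat_str_linear(s: String, n: Int) -> String {
  case n <= 0 {
    True -> ""
    False -> string.concat(list.repeat(s, n))
  }
}
```

Note the trade-off: this reintroduces the intermediate list allocation that the recursive loop was meant to avoid, in exchange for linear-time concatenation; which cost dominates depends on typical repeat counts.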
```diff
 ///
 pub fn starts_with_any(text: String, prefixes: List(String)) -> Bool {
-  list.any(prefixes, fn(prefix) { string.starts_with(text, prefix) })
+  list.any(prefixes, fn(prefix) { starts_with(text, prefix) })
```
starts_with_any now delegates to starts_with/2, which recomputes string.to_graphemes(text) for each candidate prefix. For large prefixes lists this does redundant work. Consider precomputing the grapheme list (and length) for text once and comparing against each prefix to keep grapheme-aware semantics without repeated tokenization.
Suggested change:

```diff
-  list.any(prefixes, fn(prefix) { starts_with(text, prefix) })
+  let text_graphemes = string.to_graphemes(text)
+  let text_len = list.length(text_graphemes)
+  list.any(prefixes, fn(prefix) {
+    let prefix_graphemes = string.to_graphemes(prefix)
+    let prefix_len = list.length(prefix_graphemes)
+    prefix_len <= text_len
+    && list.take(text_graphemes, prefix_len) == prefix_graphemes
+  })
```
```diff
 ///
 pub fn ends_with_any(text: String, suffixes: List(String)) -> Bool {
-  list.any(suffixes, fn(suffix) { string.ends_with(text, suffix) })
+  list.any(suffixes, fn(suffix) { ends_with(text, suffix) })
```
ends_with_any now delegates to ends_with/2, which recomputes string.to_graphemes(text) for each candidate suffix. If suffixes can be large or called in hot paths, consider factoring a helper that tokenizes text once and performs the suffix checks against that shared grapheme list.
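A sketch of such a helper, assuming `gleam/list` and `gleam/string` from the stdlib — the name `ends_with_any_shared` and the comparison strategy are illustrative, not the library's implementation:

```gleam
import gleam/list
import gleam/string

// Tokenize `text` into graphemes once, then check each suffix against
// the tail of that shared list instead of re-tokenizing per candidate.
fn ends_with_any_shared(text: String, suffixes: List(String)) -> Bool {
  let text_graphemes = string.to_graphemes(text)
  let text_len = list.length(text_graphemes)
  list.any(suffixes, fn(suffix) {
    let suffix_graphemes = string.to_graphemes(suffix)
    let suffix_len = list.length(suffix_graphemes)
    suffix_len <= text_len
    && list.drop(text_graphemes, text_len - suffix_len) == suffix_graphemes
  })
}
```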
```diff
 ### Production Usage

 ### Basic Slug Generation
```
Markdown heading hierarchy looks inconsistent here: ### Production Usage is immediately followed by other ### ... headings (e.g. ### Basic Slug Generation), so those sections become siblings rather than nested under “Production Usage”. Consider keeping “Production Usage” at ## or demoting the subsections to #### so the structure is reflected correctly in rendered docs/ToC.
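One way to restore the intended nesting, keeping "Production Usage" one level above its subsections (section names taken from the diff above):

```markdown
## Production Usage

### Basic Slug Generation
```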