
Bump 2.0.1: fixes, docs & API cleanup #21

Merged
lupodevelop merged 1 commit into main from fix-and-bug on Feb 28, 2026
Conversation

@lupodevelop (Owner)

Release bump to 2.0.1 with bug fixes, internal refactors, docs/examples updates, and tests.

Changes:

  • Bumped the package version to 2.0.1 and added a changelog entry.
  • Fixed truncate behavior: the suffix is no longer appended when the text already fits within max_len.
  • Made starts_with_any / ends_with_any grapheme-aware (they now delegate to the public starts_with / ends_with).
  • Corrected the center docs to state that padding is right-biased when uneven.
  • Simplified the find_emoji_index API (removed the unused start parameter) and tightened its loop.
  • Replaced the repeat_str implementation with a direct recursive loop, avoiding an intermediate list allocation.
  • Updated examples and docs to use the public `str` API (and `str/advanced` where applicable) and to reflect internal module path changes (src/str/internal/*).
  • Updated the tokenizer docs to reference the public `str.chars` / `str.chars_stdlib` and marked the tokenizer module as internal.
  • Added regression tests for the truncate noop, grapheme-aware starts_with_any / ends_with_any, and center right-bias.

Other: various documentation cleanups and example callsite adjustments to match the 2.0 public API.
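
The truncate fix can be pinned down with a regression test. A minimal sketch, assuming a `str.truncate(text, max_len, suffix)` signature (the actual signature is not shown in this PR) and the gleeunit test framework:

```gleam
import gleeunit/should
import str

// Assumed signature: str.truncate(text, max_len, suffix).
// When the text already fits within max_len, it must come back
// unchanged, with no suffix appended.
pub fn truncate_noop_test() {
  "hello"
  |> str.truncate(10, "...")
  |> should.equal("hello")
}
```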

Copilot AI review requested due to automatic review settings February 28, 2026 08:55
@lupodevelop lupodevelop merged commit 62ef90a into main Feb 28, 2026
3 checks passed

Copilot AI left a comment


Pull request overview

Release bump to 2.0.1 for the str Gleam library, focusing on correctness fixes for grapheme-aware behavior, small internal API cleanups, and aligning docs/examples with the 2.0 public API surface.

Changes:

  • Fixed truncate_with_flag so text that already fits within max_len is returned unchanged (suffix only applied when truncating).
  • Made starts_with_any / ends_with_any grapheme-aware by delegating to the library’s grapheme-aware starts_with / ends_with.
  • Updated docs/examples to reference the public `import str` API (and clarified internal module paths), plus added regression tests.

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 4 comments.

Summary per file:

- test/str_core_test.gleam: Adds regression tests for truncate noop, grapheme-aware starts/ends_with_any, and center right-bias.
- src/str/internal/core.gleam: Fixes the truncate condition, makes starts/ends_with_any grapheme-aware, simplifies find_emoji_index, updates the center doc, tweaks repeat_str.
- gleam.toml: Bumps the package version to 2.0.1.
- docs/str_tokenize.md: Updates tokenizer docs to the public str.chars / str.chars_stdlib API and marks the internal module path.
- docs/str_extra.md: Updates module naming/notes and examples to use the public str API; updates internal file references.
- docs/str_core.md: Updates module naming/notes and corrects the center right-bias documentation.
- EXAMPLES.md: Updates examples to the 2.0 public API (str, plus str/advanced where applicable) and internal path references.
- CHANGELOG.md: Adds 2.0.1 release notes summarizing fixes, changes, tests, and documentation updates.


Comment on lines 178 to 186

```diff
 fn repeat_str(s: String, n: Int) -> String {
-  case n <= 0 {
-    True -> ""
-    False -> list.fold(list.range(1, n), "", fn(acc, _) { acc <> s })
-  }
+  repeat_str_loop(s, n, "")
+}
+
+fn repeat_str_loop(s: String, n: Int, acc: String) -> String {
+  case n <= 0 {
+    True -> acc
+    False -> repeat_str_loop(s, n - 1, acc <> s)
+  }
 }
```

Copilot AI Feb 28, 2026


repeat_str_loop concatenates via acc <> s on every iteration, which repeatedly copies the growing string (O(n²) in repeats). This was also true of the previous list.fold(..., fn(acc, _) { acc <> s }), but if padding/indent can be large it may be worth switching to a linear-time approach that concatenates once (e.g., build chunks then string.concat).
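
A sketch of the linear-time approach the reviewer describes (illustrative only, not part of the PR): build the chunks once, then concatenate a single time. Note that gleam_stdlib also ships `string.repeat`, which could likely replace the helper outright.

```gleam
import gleam/list
import gleam/string

// Build n copies as a list, then concatenate once with
// string.concat, avoiding the O(n^2) repeated copying of acc <> s.
fn repeat_str_linear(s: String, n: Int) -> String {
  case n <= 0 {
    True -> ""
    False -> string.concat(list.repeat(s, n))
  }
}
```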

```diff
 ///
 pub fn starts_with_any(text: String, prefixes: List(String)) -> Bool {
-  list.any(prefixes, fn(prefix) { string.starts_with(text, prefix) })
+  list.any(prefixes, fn(prefix) { starts_with(text, prefix) })
 }
```

Copilot AI Feb 28, 2026


starts_with_any now delegates to starts_with/2, which recomputes string.to_graphemes(text) for each candidate prefix. For large prefix lists this does redundant work. Consider precomputing the grapheme list (and its length) for text once and comparing against each prefix, keeping grapheme-aware semantics without repeated tokenization.

Suggested change:

```gleam
let text_graphemes = string.to_graphemes(text)
let text_len = list.length(text_graphemes)
list.any(prefixes, fn(prefix) {
  let prefix_graphemes = string.to_graphemes(prefix)
  let prefix_len = list.length(prefix_graphemes)
  prefix_len <= text_len
  && list.take(text_graphemes, prefix_len) == prefix_graphemes
})
```

```diff
 ///
 pub fn ends_with_any(text: String, suffixes: List(String)) -> Bool {
-  list.any(suffixes, fn(suffix) { string.ends_with(text, suffix) })
+  list.any(suffixes, fn(suffix) { ends_with(text, suffix) })
 }
```

Copilot AI Feb 28, 2026


ends_with_any now delegates to ends_with/2, which recomputes string.to_graphemes(text) for each candidate suffix. If suffixes can be large or called in hot paths, consider factoring a helper that tokenizes text once and performs the suffix checks against that shared grapheme list.
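
A sketch of the shared-tokenization helper the comment suggests (the name and structure are assumptions, mirroring the starts_with_any suggestion):

```gleam
import gleam/list
import gleam/string

// Tokenize text once; each suffix check then compares against the
// shared grapheme list instead of re-running string.to_graphemes.
fn ends_with_any_shared(text: String, suffixes: List(String)) -> Bool {
  let text_graphemes = string.to_graphemes(text)
  let text_len = list.length(text_graphemes)
  list.any(suffixes, fn(suffix) {
    let suffix_graphemes = string.to_graphemes(suffix)
    let suffix_len = list.length(suffix_graphemes)
    suffix_len <= text_len
    && list.drop(text_graphemes, text_len - suffix_len) == suffix_graphemes
  })
}
```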

Comment on lines +246 to 249

```markdown
### Production Usage

### Basic Slug Generation
```

Copilot AI Feb 28, 2026


Markdown heading hierarchy looks inconsistent here: ### Production Usage is immediately followed by other ### ... headings (e.g. ### Basic Slug Generation), so those sections become siblings rather than nested under “Production Usage”. Consider keeping “Production Usage” at ## or demoting the subsections to #### so the structure is reflected correctly in rendered docs/ToC.
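
One way to apply that fix, keeping "Production Usage" a level above its subsections (a sketch; the actual surrounding heading levels in the doc may differ):

```markdown
## Production Usage

### Basic Slug Generation
```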
