
feat: add modular resource fetcher adapters for Expo and bare React Native#759

Open
rizalibnu wants to merge 38 commits into software-mansion:main from rizalibnu:feat/resource-fetcher-adapters

Conversation


@rizalibnu rizalibnu commented Jan 27, 2026

Description

This PR introduces modular resource fetcher adapters to support both Expo and bare React Native environments, replacing the previous monolithic approach with a flexible, platform-specific architecture.

Key Changes
New Adapter Packages:

  • @rn-executorch/expo-adapter: Resource fetcher for Expo projects using expo-file-system and expo-asset
  • @rn-executorch/bare-adapter: Resource fetcher for bare React Native projects using @dr.pogodin/react-native-fs and @kesha-antonov/react-native-background-downloader

Initialization Changes:

  • Added initExecutorch() function that requires explicit adapter selection
  • Users must now choose and configure the appropriate adapter for their project type
  • Provides better separation of concerns and platform-specific optimizations

Documentation Updates:

  • Created individual README.md files for each adapter package

Introduces a breaking change?

- [x] Yes
- [ ] No

Migration Required:
Users must now explicitly initialize the library with a resource fetcher adapter:

```typescript
// Before (no initialization needed)
import { useLLM } from 'react-native-executorch';

// After (required initialization)
import { initExecutorch, useLLM } from 'react-native-executorch';
import { ExpoResourceFetcher } from '@rn-executorch/expo-adapter'; // or BareResourceFetcher

initExecutorch({
  resourceFetcher: ExpoResourceFetcher,
});
```
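For intuition, the adapter contract behind `initExecutorch()` might be sketched roughly as below. All names and shapes besides `initExecutorch` are assumptions for illustration, not the actual react-native-executorch API:

```typescript
// Hypothetical sketch of the adapter contract (names are assumptions).
interface ResourceFetcherAdapter {
  // Resolves a remote or bundled URI to a local file path.
  fetch(uri: string, onProgress?: (progress: number) => void): Promise<string>;
}

let activeAdapter: ResourceFetcherAdapter | null = null;

function initExecutorch(config: { resourceFetcher: ResourceFetcherAdapter }): void {
  activeAdapter = config.resourceFetcher;
}

function getAdapter(): ResourceFetcherAdapter {
  if (activeAdapter === null) {
    throw new Error('Call initExecutorch() with a resource fetcher adapter first.');
  }
  return activeAdapter;
}

// A stub adapter standing in for ExpoResourceFetcher / BareResourceFetcher:
const stubAdapter: ResourceFetcherAdapter = {
  fetch: async (uri) => `/cache/${uri.split('/').pop()}`,
};

initExecutorch({ resourceFetcher: stubAdapter });
```

The hooks then look up the registered adapter at runtime, which is what makes the explicit initialization step mandatory.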

Type of change

- [ ] Bug fix (change which fixes an issue)
- [x] New feature (change which adds functionality)
- [ ] Documentation update (improves or adds clarity to existing documentation)
- [ ] Other (chores, tests, code style improvements etc.)

Tested on

  • iOS
  • Android

Testing instructions

For Expo projects:

  • Install dependencies: yarn add @rn-executorch/expo-adapter expo-file-system expo-asset
  • Initialize: initExecutorch({ resourceFetcher: ExpoResourceFetcher })
  • Run existing LLM example app to verify model downloads work correctly

For bare React Native projects:

  • Install dependencies: yarn add @rn-executorch/bare-adapter @dr.pogodin/react-native-fs @kesha-antonov/react-native-background-downloader
  • Initialize: initExecutorch({ resourceFetcher: BareResourceFetcher })
  • Run the bare React Native example app from PR #763 (feat: add bare React Native LLM chat example app)

Note: A separate PR will add a dedicated bare React Native example app to make this PR easier to review. The Expo example apps can be used to verify the Expo adapter functionality.

Screenshots

Related issues

Closes #549

Checklist

  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have updated the documentation accordingly
  • My changes generate no new warnings

Additional notes

Why This Change:

  • Different React Native environments have different filesystem APIs and capabilities
  • Expo projects benefit from using Expo's managed filesystem APIs
  • Bare React Native projects can leverage native libraries with background download support
  • Modular architecture allows for better platform-specific optimizations
  • Enables future extensibility for other environments (e.g., React Native Windows, macOS)

Split Into Multiple PRs:
To make review easier, this work has been split:

  • This PR: Core adapter infrastructure and Expo adapter implementation
  • Follow-up PR: Bare React Native example app demonstrating the bare adapter usage

BREAKING CHANGE: initExecutorch() with explicit adapter selection is now required before using any react-native-executorch hooks. Users must install and configure either @rn-executorch/expo-adapter or @rn-executorch/bare-adapter depending on their project type.

rizalibnu and others added 13 commits January 23, 2026 09:07
Add modular resource fetcher adapters to support both Expo and bare React Native environments.

## New Packages

### @rn-executorch/expo-adapter
- Expo-based resource fetcher using expo-file-system
- Supports asset bundles, local files, and remote downloads
- Download management with pause/resume/cancel capabilities

### @rn-executorch/bare-adapter
- Bare React Native resource fetcher using RNFS and background downloader
- Supports all platform-specific file operations
- Background download support with proper lifecycle management

## Core Changes

- Refactor ResourceFetcher to use adapter pattern
- Add initExecutorch() and cleanupExecutorch() for adapter management
- Export adapter interfaces and utilities
- Update LLM controller to support new resource fetching

## App Updates

- Update computer-vision, llm, speech-to-text, text-embeddings apps
- Add adapter initialization to each app
- Update dependencies to use workspace packages
Add a complete bare React Native example app demonstrating LLM integration with react-native-executorch.

## App: llm_bare

### Features
- Simple chat UI for LLM interactions
- Model loading with progress indicator
- Real-time streaming responses
- Send/stop generation controls
- Auto-scrolling message history

### Stack
- **Framework**: React Native 0.81.5 (bare/CLI)
- **LLM**: Uses LLAMA3_2_1B_SPINQUANT model
- **Adapter**: @rn-executorch/bare-adapter
- **Dependencies**: Minimal deps, only essential packages

### Platform Configuration

#### iOS
- Bridging header for RNBackgroundDownloader
- Background URL session handling in AppDelegate
- Background modes (fetch, processing)
- Xcode project configuration

#### Android
- Required permissions for background downloads
- Foreground service configuration
- Network state access
- Proper manifest configuration

### Infrastructure
- Babel configuration for export namespace transform

This serves as a reference implementation for using react-native-executorch in bare React Native environments (non-Expo).
@rizalibnu rizalibnu changed the title feat: add modular resource fetcher adapters for Expo and React Native feat: add modular resource fetcher adapters for Expo and bare React Native Jan 27, 2026
@IgorSwat IgorSwat requested a review from chmjkb January 27, 2026 08:42
@mkopcins mkopcins self-requested a review January 27, 2026 09:02
@msluszniak
Member

Lint CI fails (you don't need to worry about the second failing CI ;)). Could you please fix the errors from this CI?

@rizalibnu
Author

@msluszniak I’m not able to reproduce the lint CI failure locally — everything passes on my side.

I’ll take a closer look and investigate further to see what might be causing the discrepancy (environment, cache, or config differences). I’ll follow up with a fix or more details as soon as I find the root cause.


@msluszniak
Member

@rizalibnu Sure thing, maybe the configuration of the CI itself does not work as it should. We'll also look at this, don't worry :)

@rizalibnu rizalibnu marked this pull request as draft January 27, 2026 13:22
@rizalibnu rizalibnu marked this pull request as ready for review January 27, 2026 14:06
@rizalibnu
Author

@msluszniak Found the issue 👍
CI was failing because react-native-executorch types come from lib/typescript, which isn’t built on a fresh run. It worked locally since I already had the build artifacts.

Fixed by adding a build step before adapter type checks and bumped Node to 22 in .nvmrc due to an ESM-only dependency (arktype via react-native-builder-bob).

@rizalibnu rizalibnu force-pushed the feat/resource-fetcher-adapters branch from 72914c0 to 359427b Compare January 27, 2026 14:57
@msluszniak msluszniak added the feature PRs that implement a new feature label Jan 28, 2026
Add explicit resetAdapter() method to ResourceFetcher class for cleaner API.
- Add resetAdapter() static method that sets adapter to null
- Update cleanupExecutorch() to use resetAdapter() instead of type assertion hack
- Update error message to reference new package names (@react-native-executorch/*)

This provides a cleaner, type-safe way to reset the adapter without
requiring "null as unknown as ResourceFetcherAdapter" type assertion.
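The idea behind that commit can be sketched as follows: typing the static adapter field as nullable lets `resetAdapter()` assign plain `null` without a type assertion. This is an illustrative sketch with assumed names, not the library's actual source:

```typescript
// Sketch (assumed shapes): a nullable static field makes resetAdapter()
// type-safe, avoiding "null as unknown as ResourceFetcherAdapter".
interface ResourceFetcherAdapter {
  fetch(uri: string): Promise<string>;
}

class ResourceFetcher {
  private static adapter: ResourceFetcherAdapter | null = null;

  static setAdapter(adapter: ResourceFetcherAdapter): void {
    ResourceFetcher.adapter = adapter;
  }

  static resetAdapter(): void {
    // Field is typed as nullable, so plain null assignment type-checks.
    ResourceFetcher.adapter = null;
  }

  static hasAdapter(): boolean {
    return ResourceFetcher.adapter !== null;
  }
}

function cleanupExecutorch(): void {
  ResourceFetcher.resetAdapter();
}
```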
@msluszniak
Member

msluszniak commented Feb 3, 2026

Also, I think these changes will make the Fundamentals section of the documentation out of date. Could you look at these three subsections and make sure they're up to date? The docs for ResourceFetcher itself would also need an update.

@NorbertKlockiewicz
Contributor

Hi, I worked a bit with this PR by building a bare React Native app. From what I've tested, I didn't hit a single issue with the bare resource fetcher, and the integration was also very smooth. To sum up, my experience with it was good and I don't have any issues. Thank you @rizalibnu for this amazing piece of code :D


@msluszniak msluszniak self-assigned this Feb 16, 2026
msluszniak and others added 7 commits February 17, 2026 00:05
…el (software-mansion#734)

## Description

Currently, there is no way to set configuration in `LLMModule`
other than to load the model first and then call the `configure` method. This
PR makes it possible to configure parameters before loading the actual
model.

### Introduces a breaking change?

- [ ] Yes
- [X] No

### Type of change

- [ ] Bug fix (change which fixes an issue)
- [ ] New feature (change which adds functionality)
- [ ] Documentation update (improves or adds clarity to existing
documentation)
- [X] Other (chores, tests, code style improvements etc.)

### Tested on

- [x] iOS
- [ ] Android

### Testing instructions

Try to run configure on hook returned by `useLLM` and check that
everything works.

For simplicity, here is an example way to test it inside our
library:
* Create the following file in `apps/llm/app/my_test/index.tsx`:

```typescript
import { useIsFocused } from '@react-navigation/native';
import React, { useEffect, useState, useRef } from 'react';
import {
  View,
  Text,
  TextInput,
  TouchableOpacity,
  FlatList,
  StyleSheet,
  ActivityIndicator,
  KeyboardAvoidingView,
  Platform,
  SafeAreaView,
} from 'react-native';
import { LLMModule, LLAMA3_2_1B_QLORA } from 'react-native-executorch';

// Define message type for UI
type Message = {
  role: 'user' | 'assistant' | 'system';
  content: string;
};

export default function VoiceChatScreenWrapper() {
  const isFocused = useIsFocused();

  return isFocused ? <LlamaChat /> : null;
}


const LlamaChat = () => {
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState('');
  const [isModelReady, setIsModelReady] = useState(false);
  const [loadingProgress, setLoadingProgress] = useState(0);
  const [isGenerating, setIsGenerating] = useState(false);

  // Use a ref to keep the LLM instance stable across renders
  const llmRef = useRef<LLMModule | null>(null);

  useEffect(() => {
    // 1. Initialize the LLM Module
    llmRef.current = new LLMModule({
      // Update state whenever history changes (covers both user and bot messages)
      messageHistoryCallback: (updatedMessages) => {
        // We cast this to our Message type (assuming the library returns compatible format)
        setMessages(updatedMessages as Message[]);
      },
      // Optional: Use tokenCallback if you want to trigger haptics or very fine-grained updates
      tokenCallback: (token) => {
        // console.log('New token:', token);
      },
    });

    // 2. Load the model
    const loadModel = async () => {
      try {
        await llmRef.current?.load(LLAMA3_2_1B_QLORA, (progress) => {
          setLoadingProgress(progress);
        });
        setIsModelReady(true);
      } catch (error) {
        console.error('Failed to load model:', error);
      }
    };

    loadModel();

    llmRef.current?.configure({chatConfig: {systemPrompt: "You are extremely enthusiastic chat assistant that is ecstatic about chatting with me."}});

    // 3. Cleanup: Delete model from memory when component unmounts
    return () => {
      console.log('Cleaning up LLM...');
      llmRef.current?.delete();
    };
  }, []);

  const handleSend = async () => {
    if (!input.trim() || !isModelReady || isGenerating) return;

    const userText = input;
    setInput(''); // Clear input immediately
    setIsGenerating(true);

    try {
      // sendMessage automatically updates the history via the callback defined in useEffect
      await llmRef.current?.sendMessage(userText);
    } catch (error) {
      console.error('Error generating response:', error);
    } finally {
      setIsGenerating(false);
    }
  };

  const handleStop = () => {
    llmRef.current?.interrupt();
    setIsGenerating(false);
  };

  // --- Render Helpers ---

  if (!isModelReady) {
    return (
      <View style={styles.centerContainer}>
        <ActivityIndicator size="large" color="#007AFF" />
        <Text style={styles.loadingText}>
          Loading Model... {(loadingProgress * 100).toFixed(0)}%
        </Text>
      </View>
    );
  }

  return (
    <SafeAreaView style={styles.container}>
      <KeyboardAvoidingView
        behavior={Platform.OS === 'ios' ? 'padding' : undefined}
        style={styles.keyboardContainer}
      >
        <FlatList
          data={messages}
          keyExtractor={(_, index) => index.toString()}
          contentContainerStyle={styles.listContent}
          renderItem={({ item }) => (
            <View
              style={[
                styles.bubble,
                item.role === 'user' ? styles.userBubble : styles.botBubble,
              ]}
            >
              <Text style={item.role === 'user' ? styles.userText : styles.botText}>
                {item.content}
              </Text>
            </View>
          )}
        />

        <View style={styles.inputContainer}>
          <TextInput
            style={styles.input}
            placeholder="Ask Llama..."
            value={input}
            onChangeText={setInput}
            editable={!isGenerating}
          />
          
          {isGenerating ? (
            <TouchableOpacity onPress={handleStop} style={styles.stopButton}>
              <Text style={styles.buttonText}>Stop</Text>
            </TouchableOpacity>
          ) : (
            <TouchableOpacity onPress={handleSend} style={styles.sendButton}>
              <Text style={styles.buttonText}>Send</Text>
            </TouchableOpacity>
          )}
        </View>
      </KeyboardAvoidingView>
    </SafeAreaView>
  );
};

const styles = StyleSheet.create({
  container: { flex: 1, backgroundColor: '#F5F5F5' },
  centerContainer: { flex: 1, justifyContent: 'center', alignItems: 'center' },
  loadingText: { marginTop: 10, fontSize: 16, color: '#333' },
  keyboardContainer: { flex: 1 },
  listContent: { padding: 16 },
  bubble: {
    maxWidth: '80%',
    padding: 12,
    borderRadius: 16,
    marginBottom: 10,
  },
  userBubble: {
    alignSelf: 'flex-end',
    backgroundColor: '#007AFF',
    borderBottomRightRadius: 2,
  },
  botBubble: {
    alignSelf: 'flex-start',
    backgroundColor: '#E5E5EA',
    borderBottomLeftRadius: 2,
  },
  userText: { color: '#FFF', fontSize: 16 },
  botText: { color: '#000', fontSize: 16 },
  inputContainer: {
    flexDirection: 'row',
    padding: 10,
    borderTopWidth: 1,
    borderColor: '#DDD',
    backgroundColor: '#FFF',
  },
  input: {
    flex: 1,
    backgroundColor: '#F0F0F0',
    borderRadius: 20,
    paddingHorizontal: 16,
    paddingVertical: 10,
    fontSize: 16,
    marginRight: 10,
  },
  sendButton: {
    backgroundColor: '#007AFF',
    justifyContent: 'center',
    alignItems: 'center',
    paddingHorizontal: 20,
    borderRadius: 20,
  },
  stopButton: {
    backgroundColor: '#FF3B30',
    justifyContent: 'center',
    alignItems: 'center',
    paddingHorizontal: 20,
    borderRadius: 20,
  },
  buttonText: { color: '#FFF', fontWeight: '600' },
});
```
* Add the following in `apps/llm/app/_layout.tsx`:
```diff
+        <Drawer.Screen
+          name="my_test/index"
+          options={{
+            drawerLabel: 'Llama Chat',
+            title: 'Llama Chat',
+            headerTitleStyle: { color: ColorPalette.primary },
+          }}
+        />
```

* Add the following in `apps/llm/app/index.tsx`:

```diff
+
+          <TouchableOpacity
+          style={styles.button}
+          onPress={() => router.navigate('my_test/')}
+        >
+          <Text style={styles.buttonText}>LLama chat</Text>
+        </TouchableOpacity>
```

Run the llm app and ask about anything. The generation config should work
correctly, and the LLM's responses should now be super ecstatic. Now,
move this part:

```typescript
    llmRef.current?.configure({chatConfig: {systemPrompt: "You are extremely enthusiastic chat assistant that is ecstatic about chatting with me."}});
```

before loading the model and check that everything still works correctly.

### Screenshots

<!-- Add screenshots here, if applicable -->

### Related issues

<!-- Link related issues here using #issue-number -->

### Checklist

- [x] I have performed a self-review of my code
- [x] I have commented my code, particularly in hard-to-understand areas
- [ ] I have updated the documentation accordingly
- [x] My changes generate no new warnings

### Additional notes

<!-- Include any additional information, assumptions, or context that
reviewers might need to understand this PR. -->
… param name (software-mansion#801)

## Description

This PR changes the param name from `resize` to `resizeToInput` in
image segmentation APIs. It also defaults to true now, as the
performance impact is acceptable.

### Introduces a breaking change?

- [x] Yes
- [ ] No

### Type of change

- [ ] Bug fix (change which fixes an issue)
- [ ] New feature (change which adds functionality)
- [ ] Documentation update (improves or adds clarity to existing
documentation)
- [x] Other (chores, tests, code style improvements etc.)

### Tested on

- [ ] iOS
- [ ] Android

### Testing instructions

<!-- Provide step-by-step instructions on how to test your changes.
Include setup details if necessary. -->

### Screenshots

<!-- Add screenshots here, if applicable -->

### Related issues

<!-- Link related issues here using #issue-number -->

### Checklist

- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have updated the documentation accordingly
- [ ] My changes generate no new warnings

### Additional notes

<!-- Include any additional information, assumptions, or context that
reviewers might need to understand this PR. -->

---------

Co-authored-by: Mateusz Sluszniak <56299341+msluszniak@users.noreply.github.com>
## Description

This PR changes binaries to include new tokenizer functionalities.

Added:
- WordPiece model and decoder
- BERT and RoBERTa tokenization is now supported
- Padding and truncation from tokenizer.json is now respected



### Introduces a breaking change?

- [ ] Yes
- [x] No

### Type of change

- [x] Bug fix (change which fixes an issue)
- [ ] New feature (change which adds functionality)
- [ ] Documentation update (improves or adds clarity to existing
documentation)
- [ ] Other (chores, tests, code style improvements etc.)

### Tested on

- [x] iOS
- [x] Android

### Testing instructions
Run the test suites. 
Run all apps that use tokenizers and verify they load and produce proper
output (LLM, S2T, T2I, Embeddings etc.)

### Checklist

- [x] I have performed a self-review of my code

### Additional notes
Running the tests can yield some issues; we couldn't determine why they happen.
Calling the failing functions in the example apps yields proper results, so
this is probably an issue with the test environment. We decided not to hold
this PR due to the failing test cases and will investigate them later on.
<!-- Provide a concise and descriptive summary of the changes
implemented in this PR. -->

- [x] Yes
- [ ] No
This PR introduces a breaking change: the return types of the
`transcribe` and `stream` methods are now based on the `TranscriptionResult`
type. Also, the hook no longer exposes the commited / nonCommited properties.
`stream` is now an async generator.
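Consuming an async-generator `stream` then works with `for await...of`. The sketch below is illustrative only: the `TranscriptionResult` shape and the `committed` flag are assumptions, not the library's actual type definition:

```typescript
// Assumed shape for illustration; the real TranscriptionResult may differ.
type TranscriptionResult = { text: string; committed: boolean };

// A stand-in async generator mimicking a streaming transcriber: it yields
// partial (uncommitted) results per chunk, then a final committed one.
async function* stream(chunks: string[]): AsyncGenerator<TranscriptionResult> {
  let soFar = '';
  for (const chunk of chunks) {
    soFar += chunk;
    yield { text: soFar, committed: false };
  }
  yield { text: soFar, committed: true };
}

// Consumers iterate results as they arrive and keep the committed text.
async function transcribeAll(chunks: string[]): Promise<string> {
  let finalText = '';
  for await (const result of stream(chunks)) {
    if (result.committed) finalText = result.text;
  }
  return finalText;
}
```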

- [ ] Bug fix (change which fixes an issue)
- [x] New feature (change which adds functionality)
- [ ] Documentation update (improves or adds clarity to existing
documentation)
- [ ] Other (chores, tests, code style improvements etc.)

- [x] iOS
- [x] Android

* Run the demo app in `apps/speech` and run transcription in both
time-stamping and regular mode (both from URL and from real-time audio, to
test both `transcribe` and `stream` methods).
* Run the voice chat in `apps/llm` to check that transcription appears. *NOTE*
This example seems to be a bit buggy.
* You need to run this on an **Android device**, since this PR also fixes the
`Speech to Text` demo app when using a physical Android device.
Earlier, the required microphone permissions weren't granted and the
example effectively didn't work.
* Check that the documentation for the modified sections is updated and that
the API reference is correct as well.
* Run the tests and check that they compile and work as previously.

<!-- Add screenshots here, if applicable -->

<!-- Link related issues here using #issue-number -->

- [x] I have performed a self-review of my code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have updated the documentation accordingly
- [x] My changes generate no new warnings

<!-- Include any additional information, assumptions, or context that
reviewers might need to understand this PR. -->
@msluszniak
Member

The last thing from my side is to add the API Reference to this PR (generated from typedoc comments) and resolve the warnings/errors that come from the pre-commit hook.


Labels

feature PRs that implement a new feature


Development

Successfully merging this pull request may close these issues.

Explore removing expo dependency

6 participants