Merged
Contributor
Some things I found for examples that use metal:
Author
Oh whoops, this was not supposed to be included in this PR. I was testing using rayon inside model code to launch several pieces simultaneously. It cut the initial loading time in half, so we should explore that, but not in this PR :) Just for completeness, the fix is simply to ensure the current thread actually has a command buffer:

```rust
pub fn command_buffer(&mut self) -> Result<(bool, CommandBuffer), MetalKernelError> {
    let mut command_buffers = self.command_buffers.lock()?;
    let command_buffer = match command_buffers.get_mut(&thread::current().id()) {
        Some(command_buffer) => command_buffer,
        None => {
            let command_buffer = create_command_buffer(&self.command_queue)?;
            command_buffers.insert(thread::current().id(), command_buffer);
            command_buffers.get_mut(&thread::current().id()).unwrap()
        }
    };
    // ...
```
* got depth anything v2 example working
* fixed tensor::try_from -> tensor::new
* Depth anything v2 example: init device generically. Use anyhow result

---------

Co-authored-by: Ivar Flakstad <69173633+ivarflakstad@users.noreply.github.com>
Makes the tensor backend generic. This will allow us to support any number of backends. With a few tweaks to the repo, a backend can even be defined outside of the main repo, as opposed to the existing enum approach.
This PR introduces the traits `BackendStorage` and `BackendDevice`, plus `QuantizedBackend` for quantization support. A candle tensor now has the following definition.
Where previously the backend storage was provided by this enum
Now we instead implement `BackendStorage` for the three variants in the enum, and for any new backends we would like to support in the future. The original `Storage` is kept around and also implements `BackendStorage`, so it can be used just like before. All you have to do is specify your tensor as `Tensor<Storage>`, and all code that depends on the inner enum etc. will work as before. If you want to try transitioning to the new scheme, use the backend of your choice, like `Tensor<CudaStorage>`.

The original `Storage` is kept for now. In my experience, many people who use candle do so partly because they would like to avoid Python, so I'm not sure how much traction `candle-pyo3` has. If it is not valuable to the community, it would probably be better for the project as a whole to deprecate it. Deprecating the old `Storage` would be a logical next step, as we push users toward the new approach to backends.