I like the idea of caching AI requests to speed things up when the output would be the same anyway. However, I noticed that we actually rely heavily on embeddings through scorers like answerSimilarity, and those are currently not supported in combination with wrapAISDKModel. It would be great to support them too!
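Just to illustrate what I mean, here is a rough sketch of the shape such a feature could take. None of this is the existing API: `wrapEmbeddingModelWithCache`, `EmbeddingModelLike`, and the in-memory `Map` are names I made up for this example, and I'm only assuming the embedding model exposes a `doEmbed({ values })` method (which is roughly what AI SDK embedding providers expose, if I'm not mistaken).

```ts
// Hypothetical sketch only — not the library's actual API. It just shows how
// an embedding-model wrapper analogous to wrapAISDKModel could cache results.

type EmbedArgs = { values: string[] } & Record<string, unknown>;
type EmbedResult = { embeddings: number[][] } & Record<string, unknown>;

// Minimal structural type; real AI SDK embedding models have more fields.
interface EmbeddingModelLike {
  modelId: string;
  doEmbed(options: EmbedArgs): PromiseLike<EmbedResult>;
}

// Naive in-memory cache keyed by model id + input text.
const cache = new Map<string, number[]>();

export function wrapEmbeddingModelWithCache<M extends EmbeddingModelLike>(
  model: M,
): M {
  return {
    ...model,
    async doEmbed(options: EmbedArgs): Promise<EmbedResult> {
      // Only call the underlying model for values we haven't embedded yet.
      const misses = options.values.filter(
        (v) => !cache.has(`${model.modelId}:${v}`),
      );
      if (misses.length > 0) {
        const fresh = await model.doEmbed({ ...options, values: misses });
        misses.forEach((v, i) =>
          cache.set(`${model.modelId}:${v}`, fresh.embeddings[i]),
        );
      }
      // Reassemble the result in the original order from the cache.
      return {
        embeddings: options.values.map(
          (v) => cache.get(`${model.modelId}:${v}`)!,
        ),
      };
    },
  } as M;
}
```

The idea would be that a scorer like answerSimilarity could then be pointed at the wrapped model, so repeated evals over the same answers don't re-embed the same text every time.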