Voice-Driven Research With BikeAI
BikeAI now supports voice input and voice output for organizing your research assets.
With any arXiv question you now get:
- A summarized answer to your question, with summaries of the reference papers
- Read-aloud audio of the answer, the reference summaries, and the paper titles, produced as three assets optimized for size and portability
- A media area where you can share and listen to other research, crowdsourcing and spreading knowledge
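The arXiv question step above can be sketched roughly as follows. This is a minimal illustration, not BikeAI's actual code: it assumes the public arXiv Atom export API, and the function names are mine.

```python
import urllib.parse
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace used by arXiv feeds


def arxiv_query_url(question: str, max_results: int = 3) -> str:
    """Build a search URL for the public arXiv export API."""
    params = urllib.parse.urlencode({
        "search_query": f"all:{question}",
        "start": 0,
        "max_results": max_results,
    })
    return f"http://export.arxiv.org/api/query?{params}"


def parse_feed(atom_xml: str) -> list[tuple[str, str]]:
    """Extract (title, summary) pairs from an arXiv Atom feed."""
    root = ET.fromstring(atom_xml)
    return [
        (entry.findtext(f"{ATOM}title", "").strip(),
         entry.findtext(f"{ATOM}summary", "").strip())
        for entry in root.findall(f"{ATOM}entry")
    ]
```

The (title, summary) pairs would then be passed to the LLM for answer synthesis and to a text-to-speech service to produce the three audio assets.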
Test outputs:
https://github.com/AaronCWacker/Yggdrasil/blob/main/Markdown/Claude3.5Sonnet-Constitutional%20AI.mp3
https://github.com/AaronCWacker/Yggdrasil/blob/main/Markdown/ConsitutionalAI.mp3
https://github.com/AaronCWacker/Yggdrasil/blob/main/Markdown/Evaluating%20LLMs%20on%20the%20GMAT.mp3
https://github.com/AaronCWacker/Yggdrasil/blob/main/Markdown/Evaluating%20References%20-%20AI.mp3
The transcription links go through OpenAI. I haven't yet figured out how to stay token-quota conscious when calling the APIs for GPT-4o and Claude 3.5 Sonnet.
For now they last about the first week until the quota is met, because external viewers of the Space share the subscription API keys for the two AI titans.
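One possible guard against viewers draining a shared key is a simple rolling token budget in front of the API calls. This is a sketch of the idea, not a feature of either provider's SDK; the class and its parameters are hypothetical.

```python
import time


class TokenBudget:
    """Per-period token quota guard (illustrative sketch).

    Tracks cumulative estimated token usage and refuses calls once a
    weekly budget is spent, so a shared subscription key is not
    exhausted by external viewers in the first few days.
    """

    def __init__(self, max_tokens: int, period_seconds: float = 7 * 24 * 3600):
        self.max_tokens = max_tokens
        self.period = period_seconds
        self.used = 0
        self.window_start = time.monotonic()

    def allow(self, estimated_tokens: int) -> bool:
        """Return True and record usage if the call fits in the budget."""
        now = time.monotonic()
        if now - self.window_start >= self.period:
            # New period: reset the spent counter.
            self.used = 0
            self.window_start = now
        if self.used + estimated_tokens > self.max_tokens:
            return False  # over budget; caller should skip or defer the request
        self.used += estimated_tokens
        return True
```

A request handler would call `budget.allow(estimate)` before hitting GPT-4o or Claude 3.5 Sonnet and serve a cached or degraded response when it returns False.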
I added an arXiv-only mode which uses Mistral/Mixtral and Embeddings-as-a-Service for AI. I may augment this with other alternative LLMs based on performance versus cost, or at least identify significantly performant open models where continually paying for tokens doesn't break the scaling pattern. Long term, this may remove the dependency on the base models through economy of scale.
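The performance-versus-cost comparison can be framed as a simple cost-per-month estimate. The prices and model labels below are placeholders for illustration, not real published rates.

```python
def monthly_token_cost(price_per_million: float,
                       tokens_per_query: int,
                       queries_per_month: int) -> float:
    """Estimated monthly spend for one model (all inputs hypothetical)."""
    total_tokens = tokens_per_query * queries_per_month
    return price_per_million * total_tokens / 1_000_000


# Hypothetical per-million-token prices, for comparison shape only.
models = {"hosted-frontier": 15.00, "open-weights-hosted": 0.50}
for name, price in models.items():
    cost = monthly_token_cost(price, tokens_per_query=2_000,
                              queries_per_month=10_000)
    print(f"{name}: ${cost:.2f}/month")
```

With numbers like these, the spread between a frontier API and a hosted open model is what determines whether pay-per-token scales, which is the trade-off being weighed above.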