MPT-30B: Raising the bar for open-source foundation models

Introducing MPT-30B, a new, more powerful member of our Foundation Series of open-source models, trained with an 8k context length on NVIDIA H100 Tensor Core GPUs.