AI Infrastructure as a Service

Osiris is built to remove the barrier between AI innovation and real-world application. We want researchers and developers who come up with novel algorithms, techniques, and models to see Osiris as the best way to deploy their technology, find customers, and earn the financial and reputation rewards they deserve.

This mission requires us to handle the deployment and management aspects of AI services, so that AI developers can focus on what they do best and AI customers can be confident that the services they want will be deployed and managed in secure, scalable environments with high uptime, robustness, and performance. We will provide AI infrastructure as a service for a fee or a share of revenues, much as app stores have simplified the mobile app economy for users and developers.

Our “AI-infrastructure-as-a-service” tools will play a role analogous to the tooling offered by platforms such as AWS and Azure, but with the following design goals tailored to the needs of networked AI:

● Optimize for the computational requirements of training and deploying machine learning models. This goes beyond deep neural networks and GPU usage and considers graph processing, multi-agent systems, dynamic distributed knowledge stores, and other processing models needed to allow the emergence of networked AGI.

● Support scalable processing of stateful services, which is a challenge on current cloud platforms but necessary for many tasks, such as conversational agents, task-oriented augmented reality, and personal assistants.

● Include secure support for public, private, and hybrid cloud deployments (public–private mix and edge–cloud mix).

● Dynamically optimize compute locations to maximize compute and data proximity, improving performance and reducing bandwidth costs.
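The placement goal above can be made concrete with a scoring heuristic: prefer nodes that are close to the data, trading round-trip latency against the cost of moving the data. The sketch below is a minimal illustration under assumed names (`ComputeNode`, `placement_score`, the latency/cost weighting) and does not reflect an actual Osiris scheduler.

```python
from dataclasses import dataclass

@dataclass
class ComputeNode:
    name: str
    region: str
    cost_per_gb: float   # egress bandwidth cost to this node, USD/GB (assumed)
    latency_ms: float    # round-trip latency to the data source (assumed)

def placement_score(node: ComputeNode, data_region: str, data_gb: float) -> float:
    """Lower is better: weigh latency against the bandwidth cost of
    moving the dataset to the node. Co-located nodes transfer nothing."""
    transfer_cost = 0.0 if node.region == data_region else node.cost_per_gb * data_gb
    # The 100.0 weight balancing ms against USD is a tunable assumption.
    return node.latency_ms + 100.0 * transfer_cost

def best_node(nodes: list[ComputeNode], data_region: str, data_gb: float) -> ComputeNode:
    # Pick the node with the lowest combined latency/bandwidth score.
    return min(nodes, key=lambda n: placement_score(n, data_region, data_gb))

nodes = [
    ComputeNode("edge-1", "us-east", cost_per_gb=0.08, latency_ms=5.0),
    ComputeNode("cloud-1", "eu-west", cost_per_gb=0.05, latency_ms=40.0),
]
chosen = best_node(nodes, data_region="us-east", data_gb=100.0)
```

With a 100 GB dataset in `us-east`, the co-located edge node wins despite its higher per-GB price, because it incurs no transfer at all; a real scheduler would re-evaluate such scores dynamically as data and load move.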

We will leverage critical open-source technologies such as Kubernetes and OpenStack and support deployment of our infrastructure-as-a-service (IaaS) solution both on top of existing cloud platforms (making optimal use of their built-in tooling) and on bare-metal data centers. One key consideration is using cryptocurrency mining hardware to train AI models and to run long-running AI reasoning and inference tasks.
