OpenAI

A non-profit AI research company, discovering and enacting the path to safe artificial general intelligence.
openai.com • 4 Stories

Microsoft is investing $1 billion in OpenAI

Straight from the horse’s mouth: We’re partnering to develop a hardware and software platform within Microsoft Azure which will scale to AGI. We’ll jointly develop new Azure AI supercomputing technologies, and Microsoft will become our exclusive cloud provider—so we’ll be working hard together to further extend Microsoft Azure’s capabilities in large-scale AI systems. Sometimes it’s hard to see the value traded in large-scale investments like these. What do both sides get? With this particular investment, however, it’s pretty obvious what Microsoft is getting (Azure++) and what OpenAI is getting (an expanded R&D budget). It’s also worth noting that this partnership is specifically focused on artificial general intelligence, not merely advancing the current state of the art in machine learning.

OpenAI creates a "capped-profit" to help build artificial general intelligence

OpenAI, one of the largest and most influential AI research entities, was originally a non-profit. However, they just announced that they are creating a “capped-profit” entity, OpenAI LP. This capped-profit entity will supposedly help them accomplish their mission of building artificial general intelligence (AGI): We want to increase our ability to raise capital while still serving our mission, and no pre-existing legal structure we know of strikes the right balance. Our solution is to create OpenAI LP as a hybrid of a for-profit and nonprofit—which we are calling a “capped-profit” company. The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission, which allows us to raise investment capital and attract employees with startup-like equity. But any returns beyond that amount—and if we are successful, we expect to generate orders of magnitude more value than we’d owe to people who invest in or work at OpenAI LP—are owned by the original OpenAI Nonprofit entity. To some, this makes total sense. Others have criticized the move, saying it misrepresents money as the only barrier to AGI, or that it implies OpenAI will develop AGI in a vacuum. What do you think? Learn more about OpenAI’s mission from one of its founders in this episode of Practical AI.
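
For a concrete sense of the mechanics, here’s a minimal sketch of how a capped return could split proceeds. The 100x figure matches what OpenAI announced for its first round of investors, but the function itself is purely illustrative, not OpenAI LP’s actual legal terms:

```python
# Illustrative sketch only -- not OpenAI LP's actual legal terms.
# OpenAI announced a 100x cap for first-round investors, used as the default here.

def split_returns(investment, total_return, cap_multiple=100):
    """Split proceeds between an investor and the nonprofit.

    The investor receives at most cap_multiple * investment;
    everything beyond the cap flows to the nonprofit.
    """
    investor_share = min(total_return, cap_multiple * investment)
    nonprofit_share = max(0, total_return - investor_share)
    return investor_share, nonprofit_share

# Example: $10M invested, venture eventually returns $5B.
investor, nonprofit = split_returns(10e6, 5e9)
print(investor)   # 1000000000.0 -> investor capped at $1B (100x)
print(nonprofit)  # 4000000000.0 -> remaining $4B to the nonprofit
```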

OpenAI Fellows — Fall 2018 (now open)

As we gear up for the launch of Practical AI and more AI/ML/DS-related news coverage, I wanted to bring to your attention this 6-month, compensated apprenticeship in AI research at OpenAI. We’re now accepting applications for the next cohort of OpenAI Fellows, a program which offers a compensated 6-month apprenticeship in AI research at OpenAI. We designed this program for people who want to become AI researchers but do not have a formal background in the field. Applications for Fellows starting in September are open now and will close on July 8th at 12AM PST. Apply here.

Preparing for malicious uses of AI

Elon Musk – of SpaceX and Tesla, and co-creator of OpenAI – says this in a related video on YouTube: I am concerned about certain directions AI could take that would be not good for the future. I think it would be fair to say that not all AI futures are benign. If we create some artificial super intelligence that supersedes us in every way by a lot, it’s very important that that be benign. Elon goes on to talk more specifically about his fears of AI, and to stress that if we do create this incredible power, it should not be concentrated in the hands of a few. He doesn’t exactly say Google, but everyone knows that’s who he means. From OpenAI: We’ve co-authored a paper that forecasts how malicious actors could misuse AI technology, and potential ways we can prevent and mitigate these threats. This paper is the outcome of almost a year of sustained work with our colleagues at the Future of Humanity Institute, the Centre for the Study of Existential Risk, the Center for a New American Security, the Electronic Frontier Foundation, and others.
