
Data Science

43 Stories

Practical AI #59

Flying high with AI drone racing at AlphaPilot

Chris and Daniel talk with Keith Lynn, AlphaPilot Program Manager at Lockheed Martin. AlphaPilot is an open innovation challenge developing artificial intelligence for high-speed racing drones, created through a partnership between Lockheed Martin and The Drone Racing League (DRL). AlphaPilot challenged university teams from around the world to design AI capable of flying a drone without any human intervention or navigational pre-programming. Autonomous drones will race head-to-head through complex, three-dimensional tracks in DRL’s new Artificial Intelligence Robotic Racing (AIRR) Circuit, with up to $2 million in prizes on the line for the winning team. Keith shares the incredible story of how AlphaPilot got started, just prior to its debut race in Orlando, which will be broadcast on NBC Sports.

read more

StackShare

Cultivating your data lake

This post by Lauren Reeder of Segment goes over the different layers to consider when working with a data lake. What’s a data lake, you ask? A data lake is a centralized repository that stores both structured and unstructured data, letting you keep massive amounts of data in a flexible, cost-effective storage layer. Her article explains which tools you need and provides code & SQL statements to get started. 🤟
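
To make the layering idea concrete, here is a minimal sketch in PySpark, assuming a hypothetical S3 bucket and event schema; Lauren’s article walks through the specific tools and SQL she actually recommends.

```python
# A minimal data lake sketch, assuming a Spark environment and a
# hypothetical bucket "example-data-lake"; layer names are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("data-lake-sketch").getOrCreate()

# Ingest raw, semi-structured events into the lake as-is (the "raw" layer).
raw = spark.read.json("s3a://example-data-lake/raw/events/")

# Clean and write a columnar, partitioned copy (the "curated" layer).
(raw.dropDuplicates(["event_id"])
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-data-lake/curated/events/"))

# Query the curated layer with plain SQL.
spark.read.parquet("s3a://example-data-lake/curated/events/") \
     .createOrReplaceTempView("events")
spark.sql("SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type").show()
```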

read more

Andrew Ste cvcompiler.com

The most in-demand data science skills of 2019

Since data science has a huge impact on today’s businesses, the demand for DS experts is growing. As I write this, there are 144,527 data science jobs on LinkedIn alone. Still, it’s important to keep your finger on the pulse of the industry and stay aware of the fastest and most efficient data science solutions. Click through for key takeaways and trend analysis.

read more

Practical AI #50

Celebrating episode 50 and the neural net!

Woo hoo! As we celebrate reaching episode 50, we come full circle to discuss the basics of neural networks. If you are just jumping into AI, this episode is a great primer to help you take that leap. Our commitment to making artificial intelligence practical, productive, and accessible to everyone has never been stronger, so we invite you to join us for the next 50 episodes!

read more

Practical AI #49

Exposing the deception of DeepFakes

This week we bend reality to expose the deceptions of deepfake videos. We talk about what they are, why they are so dangerous, and what you can do to detect and resist their insidious influence. In a political environment rife with distrust, disinformation, and conspiracy theories, deepfakes are being weaponized and proliferated as the latest form of state-sponsored information warfare. Join us for an episode scarier than your favorite horror movie, because this AI bogeyman is real!

read more

Practical AI #47

GANs, RL, and transfer learning oh my!

Daniel and Chris explore three potentially confusing topics - generative adversarial networks (GANs), deep reinforcement learning (DRL), and transfer learning. Are these types of neural network architectures? Are they something different? How are they used? Well, if you have ever wondered how AI can be creative, wished you understood how robots get their smarts, or been impressed at how some AI practitioners conquer big challenges quickly, then this is your episode!

read more

Practical AI #45

How to get plugged into the AI community

Chris and Daniel take you on a tour of local and global AI events, and discuss how to get the most out of your experiences. From access to experts to developing new industry relationships, learn how to get your foot in the door and make connections that help you grow as an AI practitioner. Then drawing from their own wealth of experience as speakers, they dive into what it takes to give a memorable world-class talk that your audience will love. They break down how to select the topic, write the abstract, put the presentation together, and deliver the narrative with impact!

read more

Practical AI #44

AI adoption in the enterprise

At the recent O’Reilly AI Conference in New York City, Chris met up with O’Reilly Chief Data Scientist Ben Lorica, the Program Chair for Strata Data, the AI Conference, and TensorFlow World. O’Reilly’s ‘AI Adoption in the Enterprise’ report had just been released, so naturally Ben and Chris wanted to do a deep dive into enterprise AI adoption to discuss strategy, execution, and implications.

read more

Practical AI #42

TensorFlow Dev Summit 2019

This week Daniel and Chris discuss the announcements made recently at TensorFlow Dev Summit 2019. They kick it off with the alpha release of TensorFlow 2.0, which features eager execution and an improved user experience through Keras, which has been integrated into TensorFlow itself. They round out the list with TensorFlow Datasets, TensorFlow Addons, TensorFlow Extended (TFX), and the upcoming inaugural O’Reilly TensorFlow World conference.
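
For a sense of what those 2.0 changes look like in practice, here is a minimal sketch (assuming the TensorFlow 2.0 alpha or later) of eager execution and the Keras API that now ships inside TensorFlow itself:

```python
# A minimal sketch of the TensorFlow 2.0 style discussed in the episode:
# eager execution by default, and Keras integrated as tf.keras.
import tensorflow as tf

# Eager execution: ops run immediately and return concrete values,
# no Session or graph-building boilerplate required.
x = tf.constant([[1.0, 2.0]])
print(tf.matmul(x, tf.transpose(x)))

# Keras built into TensorFlow: define, compile, and inspect a model directly.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```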

read more

Practical AI #40

Deep Reinforcement Learning

While attending the NVIDIA GPU Technology Conference in Silicon Valley, Chris met up with Adam Stooke, a speaker and PhD student at UC Berkeley who is doing groundbreaking work in large-scale deep reinforcement learning and robotics. Adam took Chris on a tour of deep reinforcement learning - explaining what it is, how it works, and why it’s one of the hottest technologies in artificial intelligence!

read more

Practical AI #39

Making the world a better place at the AI for Good Foundation

Longtime listeners know that we’re always advocating for ‘AI for good’, but this week we have taken it to a whole new level. We had the privilege of chatting with James Hodson, Director of the AI for Good Foundation, about ways they have used artificial intelligence to positively impact the world - from food production to climate change. James inspired us to find our own ways to use AI for good, and we challenge our listeners to get out there and do some good!

read more

Hamel Husain towardsdatascience.com

How to automate tasks on GitHub with machine learning for fun and profit

This is an explainer on how to build a GitHub App that predicts and applies issue labels using TensorFlow and public datasets. Hamel Husain writes: In order to show you how to create your own apps, we will walk you through the process of creating a GitHub app that can automatically label issues. Note that all of the code for this app, including the model training steps, is located in this GitHub repository. See also: Issue Label Bot
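
As a rough illustration of the core idea (not Hamel’s actual pipeline, which lives in the linked repository), issue labeling boils down to text classification. The snippet below is a hedged sketch with made-up example issues and label ids:

```python
# A hedged, illustrative sketch of issue-label prediction: classify issue
# text into labels such as "bug" or "feature". The data, vocabulary size,
# and model here are assumptions, not the article's real training setup.
import numpy as np
import tensorflow as tf

# Hypothetical training data: issue texts and integer label ids
# (0 = bug, 1 = feature request, 2 = question).
texts = ["app crashes on startup",
         "please add dark mode",
         "how do I configure the cache?"]
labels = np.array([0, 1, 2])

tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=10000, oov_token="<unk>")
tokenizer.fit_on_texts(texts)
x = tf.keras.preprocessing.sequence.pad_sequences(
    tokenizer.texts_to_sequences(texts), maxlen=50)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(10000, 32, input_length=50),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, labels, epochs=5, verbose=0)

# At serving time, the GitHub App would run model.predict on new issue text
# and apply the highest-probability label through the GitHub API.
```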

read more

Practical AI #36

Growing up to become a world-class AI expert

While at the NVIDIA GPU Technology Conference 2019 in Silicon Valley, Chris enjoyed an inspiring conversation with Anima Anandkumar. Clearly a role model - not only for women, but for anyone in the world of AI - Anima relayed how her lifelong passion for mathematics and engineering started when she was only 3 years old in India, and ultimately led to her pioneering deep learning research at Amazon Web Services, Caltech, and NVIDIA.

read more

NVIDIA Developer Blog

NVIDIA Jetson Nano - A $99 computer for embedded AI

Google, Intel, and others have recently been targeting AI at the edge with things like Coral and the Neural Compute Stick, but NVIDIA is taking things a step further. They just announced the Jetson Nano, a $99 computer with 472 GFLOPS of compute performance, an integrated NVIDIA GPU, and a Raspberry Pi form factor. According to NVIDIA: The compute performance, compact footprint, and flexibility of Jetson Nano brings endless possibilities to developers for creating AI-powered devices and embedded systems. And it’s not only for inference (which is the main target of things like Intel’s NCS). The Jetson Nano can also handle AI model training: since Jetson Nano can run full training frameworks like TensorFlow, PyTorch, and Caffe, it’s also able to re-train with transfer learning for those who may not have access to another dedicated training machine and are willing to wait longer for results. Check it out! You can pre-order now.
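
To picture the kind of re-training workload that implies, here is a hedged transfer-learning sketch in Keras; the backbone, class count, and hyperparameters are illustrative assumptions, not NVIDIA’s benchmark setup:

```python
# A minimal transfer-learning sketch of the sort a Jetson Nano could run,
# assuming TensorFlow/Keras; model choice and the 5-class task are made up.
import tensorflow as tf

# Start from an ImageNet-pretrained backbone and freeze its weights.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

# Add a small trainable head for a hypothetical 5-class task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # re-trains only the head
```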

read more

The Allen Institute for AI

China to overtake US in AI research

China has committed to becoming the world leader in AI by 2030, with goals to build a domestic artificial intelligence industry worth nearly $150 billion (according to this CNN article). Prompted by these efforts, the Semantic Scholar team at the Allen Institute for AI analyzed over two million academic AI papers published through the end of 2018. This analysis revealed the following: Our analysis shows that China has already surpassed the US in published AI papers. If current trends continue, China is poised to overtake the US in the most-cited 50% of papers this year, in the most-cited 10% of papers next year, and in the 1% of most-cited papers by 2025. Citation counts are a lagging indicator of impact, so our results may understate the rising impact of AI research originating in China. They also emphasize that US actions are making it difficult to recruit and retain foreign students and scholars, and these difficulties are likely to exacerbate the trend towards Chinese supremacy in AI research.

read more

OpenAI

OpenAI creates a "capped-profit" to help build artificial general intelligence

OpenAI, one of the largest and most influential AI research entities, was originally a non-profit. However, they just announced that they are creating a “capped-profit” entity, OpenAI LP. This capped-profit entity will supposedly help them accomplish their mission of building artificial general intelligence (AGI): We want to increase our ability to raise capital while still serving our mission, and no pre-existing legal structure we know of strikes the right balance. Our solution is to create OpenAI LP as a hybrid of a for-profit and nonprofit—which we are calling a “capped-profit” company. The fundamental idea of OpenAI LP is that investors and employees can get a capped return if we succeed at our mission, which allows us to raise investment capital and attract employees with startup-like equity. But any returns beyond that amount—and if we are successful, we expect to generate orders of magnitude more value than we’d owe to people who invest in or work at OpenAI LP—are owned by the original OpenAI Nonprofit entity. To some, this makes total sense. Others have criticized the move, because they say that it misrepresents money as the only barrier to AGI or implies that OpenAI will develop it in a vacuum. What do you think? Learn more about OpenAI’s mission from one of its founders in this episode of Practical AI.

read more

Practical AI #34

The White House Executive Order on AI

The White House recently published an “Executive Order on Maintaining American Leadership in Artificial Intelligence.” In this fully connected episode, we discuss the executive order in general and criticism from the AI community. We also draw some comparisons between this US executive order and other national strategies for leadership in AI.

read more

Practical AI #33

Staving off disaster through AI safety research

While covering Applied Machine Learning Days in Switzerland, Chris met El Mahdi El Mhamdi by chance, and was fascinated with his work doing AI safety research at EPFL. El Mahdi agreed to come on the show to share his research into the vulnerabilities in machine learning that bad actors can take advantage of. We cover everything from poisoned data sets and hacked machines to AI-generated propaganda and fake news, so grab your James Bond 007 kit from Q Branch, and join us for this important conversation on the dark side of artificial intelligence.

read more

AI (Artificial Intelligence) towardsdatascience.com

A response to OpenAI's new dangerous text generator

Those of you following AI-related things on Twitter have probably been overwhelmed with commentary about OpenAI’s new GPT-2 language model, which is “Too Dangerous to Make Public” (according to Wired’s interpretation of OpenAI’s statements). Is this discussion frustrating or confusing for you? Well, Ryan Lowe from McGill University has published a nice response article. He discusses the model and results in general, but also gives some perspective on the ethical implications and where the AI community should go from here. According to Lowe: “The machine learning community really, really needs to start talking openly about our standards for ethical research release.”

read more
