
AI (Artificial Intelligence)

Machines simulating human characteristics and intelligence.
382 Stories

Practical AI #218

Computer scientists as rogue art historians

What can art historians and computer scientists learn from one another? Actually, a lot! Amanda Wasielewski joins us to talk about how she discovered that computer scientists working on computer vision were actually acting like rogue art historians, and how art historians have found machine learning to be a valuable tool for research, fraud detection, and cataloguing. We also discuss the rise of generative AI and how this technology might cause us to ask new questions like: “What makes a photograph a photograph?”

Practical AI #217

Accelerated data science with a Kaggle grandmaster

Daniel and Chris explore the intersection of Kaggle and real-world data science in this illuminating conversation with Christof Henkel, Senior Deep Learning Data Scientist at NVIDIA and Kaggle Grandmaster. Christof offers a very lucid explanation of how participation in Kaggle can positively impact a data scientist’s skills and career aspirations. He also shares his insights and approach to maximizing AI productivity using GPU-accelerated tools like RAPIDS and DALI.

Chrome github.com

Automate your browser with GPT-4

Taxy uses GPT-4 to control your browser and perform repetitive actions on your behalf. Currently it allows you to define ad-hoc instructions. In the future it will also support saved and scheduled workflows.

Taxy’s current status is research preview. Many workflows fail or confuse the agent. If you’d like to hack on Taxy to make it better or test it on your own workflows, follow the instructions below to run it locally. If you’d like to know once it’s available for wider usage, you can sign up for our waitlist.

Ok that’s cool… 🤯

Here it is using Google Calendar with the prompt “Schedule standup tomorrow at 10am. Invite david@taxy.ai”
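
For a rough sense of the pattern behind tools like this, here’s a minimal Python sketch that asks GPT-4 to turn a natural-language instruction into a structured browser action. It’s purely illustrative: the prompt, the action schema, and the JSON parsing are our own assumptions, not Taxy’s actual implementation.

```python
# Illustrative sketch only -- not Taxy's code. We assume an OpenAI-style chat
# API and a made-up action schema (action/selector/text) to show the idea of
# mapping a natural-language instruction to a structured browser action.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You control a web browser. Reply with a single JSON object with keys "
    "'action' (one of: click, type, navigate), 'selector', and optional 'text'."
)

instruction = "Schedule standup tomorrow at 10am. Invite david@taxy.ai"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": instruction},
    ],
    temperature=0,  # keep the output as predictable as possible
)

# A real agent would validate this and loop: execute the action, feed the new
# page state back to the model, and ask for the next step.
action = json.loads(response.choices[0].message.content)
print(action)
```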


AI (Artificial Intelligence) futureoflife.org

A petition to pause all AI experiments for at least 6 months

This open letter by the Future of Life Institute has been signed by 1,380 people (so far), including notable technologists such as Steve Wozniak, Stuart Russell, Emad Mostaque (Stability AI) & Elon Musk.

Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

Practical AI #216

Explainable AI that is accessible for all humans

We are seeing an explosion of AI apps that are (at their core) a thin UI on top of calls to OpenAI generative models. What risks are associated with this sort of approach to AI integration, and are explainability and accountability achievable in chat-based assistants?

Beth Rudden of Bast.ai has been thinking about this topic for some time and has developed an ontological approach to creating conversational AI. We hear more about that approach and related work in this episode.

Steve Yegge about.sourcegraph.com

Cheating is all you need

Steve Yegge is very excited about LLMs and thinks the rest of us should be as well:

There is something legendary and historic happening in software engineering, right now as we speak, and yet most of you don’t realize at all how big it is.

LLMs aren’t just the biggest change since social, mobile, or cloud–they’re the biggest thing since the World Wide Web. And on the coding front, they’re the biggest thing since IDEs and Stack Overflow, and may well eclipse them both.

Steve’s been in the industry a long time. He worked at Amazon back when AWS was just a demo on some engineer’s laptop and he worked at Google when Kubernetes was just a demo on some engineer’s laptop.

The point: when Steve Yegge gets excited about something, it probably means more than when most people get excited about something.

Changelog Interviews #532

Bringing Whisper and LLaMA to the masses

This week we’re talking with Georgi Gerganov about his work on Whisper.cpp and llama.cpp. Georgi first crossed our radar with whisper.cpp, his port of OpenAI’s Whisper model in C and C++. Whisper is a speech recognition model enabling audio transcription and translation. Something we’re paying close attention to here at Changelog, for obvious reasons. Between the invite and the show’s recording, he had a new hit project on his hands: llama.cpp. This is a port of Facebook’s LLaMA model in C and C++. Whisper.cpp made a splash, but llama.cpp is growing in GitHub stars faster than Stable Diffusion did, which was a rocket ship itself.
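
If you want a feel for what Whisper does before reaching for the C/C++ port, OpenAI’s reference Python package transcribes audio in a handful of lines; the file name below is just a placeholder.

```python
# Transcribe an audio file with OpenAI's reference Python Whisper package
# (pip install openai-whisper); whisper.cpp provides the same capability as a
# dependency-light C/C++ port. "interview.wav" is a placeholder path.
import whisper

model = whisper.load_model("base")           # model weights download on first use
result = model.transcribe("interview.wav")   # pass task="translate" to translate to English
print(result["text"])
```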

Josh Comeau joshwcomeau.com

The end of front-end development

Josh Comeau:

Over the past few months, I’ve spoken with lots of early-career devs who are getting more and more anxious about AI. They’ve seen the increasingly-impressive demos from tools like GPT-4, and they worry that by the time they’re fluent in HTML/CSS/JS, there won’t be any jobs left for them.

I couldn’t disagree more. I don’t think web developer jobs are going anywhere. And I’m getting pretty sick of the FUD being spread online.

So, in this blog post, I’m going to share my hypothesis for what will happen. Things are going to change, but not in the scary way people are saying.

AI (Artificial Intelligence) Twitter

GPT-4 is phenomenal at code

Sualeh Asif from Control (an AI code editor):

We’ve been using GPT-4 for a few months internally, and we thought we’d highlight a few examples (https://github.com/anysphere/gpt-4-for-code) that have been both particularly impressive and really useful to us.

Here it’s converting a Python dict of member functions to esoteric but correct-on-first-try C++ code 👇
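
To make that concrete, here’s a made-up stand-in for the kind of input described, not the actual dict from the linked repo:

```python
# A made-up stand-in (not from the linked repo): a Python dict mapping method
# names to small functions, the sort of thing you could ask GPT-4 to rewrite
# as equivalent C++ member functions.
member_functions = {
    "area":      lambda self: self.width * self.height,
    "perimeter": lambda self: 2 * (self.width + self.height),
    "diagonal":  lambda self: (self.width ** 2 + self.height ** 2) ** 0.5,
}
```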


Justin Searls blog.testdouble.com

How to tell if AI threatens YOUR job (and 3 simple rules to keep it)

Justin Searls dives deep into whether AI tools like ChatGPT actually threaten knowledge worker jobs and provides helpful ideas around what to do about it.

Having spent months programming with GitHub Copilot, weeks talking to ChatGPT, and days searching via Bing Chat as an alternative to Google, the best description I’ve heard of AI’s capabilities is “fluent bullshit.” And after months of seeing friends “cheat” at their day jobs by having ChatGPT do their homework for them, I’ve come to a pretty grim, if obvious, realization:

The more excited someone is by the prospect of AI making their job easier, the more they should be worried.

Practical AI #214

End-to-end cloud compute for AI/ML

We’ve all experienced pain moving from local development, to testing, and then on to production. This cycle can be long and tedious, especially as AI models and datasets are integrated. Modal is trying to make this loop of development as seamless as possible for AI practitioners, and their platform is pretty incredible!

Erik from Modal joins us in this episode to help us understand how we can run or deploy machine learning models, massively parallel compute jobs, task queues, web apps, and much more, without our own infrastructure.

Practical AI #213

Success (and failure) in prompting

With the recent proliferation of generative AI models (from OpenAI, co:here, Anthropic, etc.), practitioners are racing to come up with best practices around prompting, grounding, and control of outputs.

Chris and Daniel take a deep dive into the kinds of behavior we are seeing with this latest wave of models (both good and bad) and what leads to that behavior. They also dig into some prompting and integration tips.
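
As one concrete illustration of the kind of prompting the episode covers, here’s a minimal grounding sketch: supply trusted context in the prompt and tell the model to answer only from it. The template, context string, and model choice are our own example, not something specific from the episode.

```python
# A minimal "grounding" sketch: put trusted context into the prompt and
# instruct the model to answer only from it. The context, template, and model
# are illustrative; any chat-completion API could stand in here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

context = "Practical AI is a podcast about making artificial intelligence practical, productive, and accessible."

prompt = (
    "Answer using only the context below. If the answer isn't in the context, "
    "say you don't know.\n\n"
    f"Context: {context}\n\n"
    "Question: What is Practical AI about?"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # low temperature for more controlled output
)
print(response.choices[0].message.content)
```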

Practical AI #212

Applied NLP solutions & AI education

We’re super excited to welcome Jay Alammar to the show. Jay is a well-known AI educator, applied NLP practitioner at co:here, and author of the popular blog, “The Illustrated Transformer.” In this episode, he shares his ideas on creating applied NLP solutions, working with large language models, and creating educational resources for state-of-the-art AI.

Practical AI #211

Serverless GPUs

We’ve been hearing about “serverless” CPUs for some time, but it’s taken a while to get to serverless GPUs. In this episode, Erik from Banana explains why it’s taken so long, and he helps us understand how these new workflows are unlocking state-of-the-art AI for application developers. Forget about servers, but don’t forget to listen to this one!
