RAG continues to rise
Daniel & Chris delight in conversation with “the funniest guy in AI”, Demetrios Brinkmann. Together they explore the results of the MLOps Community’s latest survey. They also preview the upcoming AI Quality Conference.
Discussion
Julien Siebert
2024-04-18T18:23:31Z ago
Neuromorphic computing mostly relies on Spiking Neural Networks (SNNs). In comparison to “classical” neural networks, where a neuron takes numbers as input and outputs a number, an SNN takes signals as input (a bunch of numbers over time) and outputs a signal. The activation function generates a spike when enough input spikes arrive around the same time (hence the name Spiking Neural Networks). This looks much more like what the brain actually does (at least in comparison to “classical” neural networks). The main advantage is that you can potentially work on analog signals and save lots of energy. The main disadvantage is training: you need to simulate the signals going through the network over time, which is relatively costly (it takes a lot of time). For now these networks run on specialized neuromorphic chips (see https://www.intel.com/content/www/us/en/research/neuromorphic-computing.html)
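The spiking behavior described above can be sketched with a simple leaky integrate-and-fire neuron: the membrane potential accumulates incoming spike weights, decays over time, and fires only when enough input arrives close together. This is a minimal illustrative sketch, not any particular SNN library's API; the decay factor, threshold, and input values are all assumed for the example.

```python
def lif_neuron(input_spikes, decay=0.5, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron over discrete time steps.

    input_spikes: summed input spike weights arriving at each time step.
    Returns the output spike train as a list of 0s and 1s.
    Parameters are illustrative assumptions, not values from the episode.
    """
    v = 0.0       # membrane potential
    output = []
    for x in input_spikes:
        v = decay * v + x        # leak (decay) plus integration of input
        if v >= threshold:       # enough spikes arrived close in time
            output.append(1)     # fire a spike...
            v = 0.0              # ...and reset the potential
        else:
            output.append(0)
    return output

# The same total input fires only when it is clustered in time:
print(lif_neuron([0.7, 0.7, 0.0, 0.0]))  # clustered -> [0, 1, 0, 0]
print(lif_neuron([0.7, 0.0, 0.0, 0.7]))  # spread out -> [0, 0, 0, 0]
```

The second call shows why timing matters: the potential from the first spike has decayed away before the second arrives, so the neuron never crosses threshold.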