AI, or artificial intelligence, is powering a massive shift in the roles that computers play in our personal and professional lives. Most technical organizations expect to gain or strengthen their competitive advantage through the use of AI. But are you in a position to fulfill that expectation, to transform your research, your products, or your business using AI?
Chris Hayhurst looks at the techniques that make up AI (deep learning, computer vision, robotics, and more), enabling you to identify opportunities to leverage it in your work. You will also learn how MATLAB® and Simulink® are giving engineers and scientists AI capabilities that were previously available only to highly specialized software developers and data scientists.
New technologies such as 5G and mMIMO (massive MIMO), with their large increases in data volume and scale, mean that existing infrastructure and components must evolve rapidly and adopt new technologies and techniques to enable deployment. Companies therefore face the challenge of bringing such features to market faster, more cleanly, and more tightly packaged.
In response, Nokia SoC teams have adopted Model-Based Design in many areas. This has been a stepwise buildup, bringing together various previously independent activities at multiple abstraction levels. Since the initial buildup, the new way of working has helped Nokia meet growing complexity challenges, speed up development, and improve visibility into design intent. During development of the flow, collaboration with MathWorks helped identify and iterate on how best to integrate the improved way of working into existing development flows. This presentation will show how introducing Model-Based Design at Nokia SoC enabled highly concurrent design flows.
Machine learning is driving innovation in many application areas, including predictive maintenance, digital health and patient monitoring, financial portfolio forecasting, and advanced driver assistance. Developing machine learning models and deploying them on embedded systems or cloud infrastructure often still requires significant expertise with signal processing, big data, and model optimization.
This talk addresses how MATLAB® empowers engineers and scientists without significant signal processing or machine learning expertise to tackle the challenges of obtaining insights from real-world data.
Learn about new capabilities in the latest releases of MATLAB® and Simulink® that will help your research, design, and development workflows become more efficient.
Deep learning can achieve state-of-the-art accuracy for many tasks considered algorithmically unsolvable using traditional machine learning, including classifying objects in a scene or recognizing optimal paths in an environment. Gain practical knowledge of the domain of deep learning and discover new MATLAB® features that simplify these tasks and eliminate the low-level programming. From prototype to production, you’ll see demonstrations on building and training neural networks and hear a discussion on automatically converting a model to CUDA® to run natively on GPUs.
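The training loop at the heart of such networks can be illustrated in miniature with a single-neuron classifier. This pure-Python sketch uses the classic perceptron learning rule on hypothetical toy data; it is a stand-in for, not a reproduction of, MATLAB's deep learning tooling:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Minimal single-neuron classifier trained with the perceptron rule."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # labels are -1 or +1
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:  # update weights only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Hypothetical, linearly separable toy data: label = sign(x0 - x1)
X = [(2, 1), (3, 0), (1, 2), (0, 3)]
y = [1, 1, -1, -1]
w, b = train_perceptron(X, y)
```

On linearly separable data like this, the perceptron rule is guaranteed to converge; deep networks replace the single neuron with stacked layers and gradient-based optimization, but the fit-predict loop is the same shape.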
Designing and deploying deep learning and computer vision applications to embedded CPU and GPU platforms is challenging because of the resource constraints inherent in embedded devices. A MATLAB®-based workflow facilitates the design of these applications, and automatically generated C or CUDA® code can be deployed on boards such as Jetson TX2 and DRIVE™ PX to achieve very fast inference. The presentation illustrates how MATLAB supports all major phases of this workflow. Starting with algorithm design, the algorithm may employ deep learning networks augmented with traditional computer vision techniques and can be tested and verified within MATLAB. Next, these networks are trained using GPU and parallel computing support for MATLAB on the desktop, a cluster, or the cloud. Finally, GPU Coder™ generates portable and optimized C/C++ and/or CUDA® code from the MATLAB algorithm, which is then cross-compiled and deployed to CPUs and/or Tegra® boards. Benchmarks show that the performance of the auto-generated CUDA code is ~2.5x faster than MXNet, ~5x faster than Caffe2, ~7x faster than TensorFlow®, and on par with a TensorRT™ implementation.
Heart rate variability (HRV) is a commonly used tool for evaluating autonomic nervous system function. HRV is widely used in many fields of health and well-being research as well as in translational research. Accurate HRV analysis requires a good-quality electrocardiogram recording, but various wearable and hand-held devices on the market can also provide sufficient accuracy, meaning that nowadays almost anyone can use HRV for personal monitoring. Kubios HRV is scientifically validated HRV analysis software, providing detailed and illustrative monitoring of autonomic nervous system function. The software is widely used by researchers, well-being professionals, and people who want to monitor their daily stress levels or evaluate how exercise and training affect their health. This presentation will show how MATLAB® was used during the development of Kubios HRV, including the use of MATLAB Compiler™ to create a standalone application.
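Kubios HRV's own algorithms are not reproduced here, but two of the standard time-domain HRV metrics such tools report, SDNN and RMSSD, have well-known definitions and can be sketched directly from a list of RR intervals (the sample values below are hypothetical):

```python
import math

def hrv_time_domain(rr_ms):
    """Compute two standard time-domain HRV metrics from RR intervals in ms."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: sample standard deviation of all RR intervals
    sdnn = math.sqrt(sum((rr - mean_rr) ** 2 for rr in rr_ms) / (n - 1))
    # RMSSD: root mean square of successive RR-interval differences
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))
    return sdnn, rmssd

# Hypothetical RR intervals (ms) from a short recording
rr = [812, 790, 835, 801, 820, 795]
sdnn, rmssd = hrv_time_domain(rr)
```

SDNN reflects overall variability over the recording, while RMSSD emphasizes beat-to-beat changes and is commonly used as a proxy for parasympathetic activity.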
Interest in predictive maintenance is increasing as more and more companies see it as a key application for data analytics that run on the Internet of Things. This talk covers the development of these predictive maintenance algorithms, as well as their deployment on the two main nodes of the IoT—the edge and the cloud.
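As an illustration of the edge side of such an algorithm, a minimal rolling-statistics anomaly detector (a common starting point for predictive maintenance, not the talk's specific method) might look like this in Python; the window size and threshold are assumed values:

```python
from collections import deque
import math

def make_detector(window=20, threshold=3.0):
    """Flag readings more than `threshold` std devs from the rolling mean."""
    history = deque(maxlen=window)
    def check(x):
        if len(history) >= window:
            mean = sum(history) / len(history)
            var = sum((v - mean) ** 2 for v in history) / (len(history) - 1)
            std = math.sqrt(var)
            anomalous = std > 0 and abs(x - mean) > threshold * std
        else:
            anomalous = False  # not enough history yet
        history.append(x)
        return anomalous
    return check

# Hypothetical vibration-sensor readings; the last one is a fault spike
check = make_detector(window=10, threshold=3.0)
readings = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 1.05, 5.0]
flags = [check(x) for x in readings]
```

A detector like this runs cheaply on an edge device, while flagged windows can be forwarded to the cloud for heavier model-based diagnosis.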
As the size and variety of your engineering data have grown, so has the capability to access, process, and analyze those (big) engineering data sets in MATLAB®. With the rise of streaming data technologies and large-scale cloud infrastructure, the volume and velocity of this data have increased significantly, motivating new approaches to handling data-in-motion. This presentation and demo highlight the use of MATLAB as a data analytics platform, together with best-in-class stream processing frameworks and cloud infrastructure, to express MATLAB-based workflows that enable decision-making in near-real-time through the application of machine learning models. It demonstrates how to use MATLAB Production Server™ to deploy these models on streams of data from Apache® Kafka®. The demonstration shows a full workflow, from developing a machine learning model in MATLAB to deploying it against a real-world-sized problem running on the cloud.
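The core stream-processing pattern (apply a deployed model to each sliding window of records as they arrive) can be sketched in plain Python. The `score` function below is a hypothetical stand-in for a model hosted on MATLAB Production Server, and no Kafka client is involved:

```python
from collections import deque

def score(window):
    """Hypothetical stand-in for a deployed model: mean of the window."""
    return sum(window) / len(window)

def process_stream(stream, window_size=4):
    """Apply the model to each complete sliding window as records arrive."""
    window = deque(maxlen=window_size)  # oldest record drops out automatically
    results = []
    for record in stream:
        window.append(record)
        if len(window) == window_size:
            results.append(score(window))
    return results

# Hypothetical stream of sensor records
out = process_stream([2, 4, 6, 8, 10], window_size=4)
```

In a production deployment, the per-window call would be an HTTP request to the hosted model, and the loop body would be driven by a Kafka consumer rather than a Python list.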
Increasing environmental awareness has made electrification a major technology driver. Autonomous electric vehicles in highly dynamic scenarios (such as braking) are characterized by both physical and algorithmic complexity. Through this example, this presentation highlights how Model-Based Design allows integration of components while capturing critical multidomain interactions (heat, electricity, and motion). Such a simulation platform is instrumental for developing safe and performant technology in an agile manner, shortening time-to-market. Reuse of these models for detecting faults and degradation, along with real-time testing, will also be covered.
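One of the multidomain interactions involved, the split of braking energy between recovered electricity and heat, can be illustrated with a simple lumped energy balance (all numbers below are hypothetical, not from the presentation):

```python
def braking_energy(mass_kg, v0_mps, v1_mps, regen_efficiency):
    """Split a vehicle's kinetic-energy change during braking into
    recovered electrical energy and heat (simple lumped model)."""
    delta_ke = 0.5 * mass_kg * (v0_mps ** 2 - v1_mps ** 2)  # J removed
    recovered = regen_efficiency * delta_ke  # returned to the battery
    heat = delta_ke - recovered              # dissipated in brakes/losses
    return recovered, heat

# Hypothetical case: 1500 kg vehicle braking from 20 m/s to 5 m/s
# with 60% regenerative efficiency
recovered, heat = braking_energy(1500, 20.0, 5.0, 0.6)
```

Even this toy model shows why the thermal and electrical domains must be simulated together: the heat term sizes the brake and motor cooling, while the recovered term drives battery charging limits.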
Years of engineering expertise and best practices form the basis for the industry standards used in developing high integrity and mission critical systems. The standards include proven guidelines which can improve the quality of any design. Learn how you can take advantage of best practices from standards such as ISO 26262, DO-178/DO-331, IEC 61508, MISRA®, and others to find errors earlier in your process and improve the quality of your Simulink® models.
Real-time testing has become more and more important for staying innovative, shortening time-to-market by starting to test earlier, and avoiding expensive prototypes. In this presentation, you will see how to enable a high degree of reuse going from desktop development to verification of your design in the real world, both for a rapid prototyping and a hardware-in-the-loop (HIL) scenario. For rapid prototyping, the presentation will cover the latest advancements targeting FPGA technology. For HIL, it will show how to set up a structured framework that can be reused for both desktop and real-time tests.
Increased environmental awareness has positioned energy management as an enabler of more efficient ship design and sustainable operation. The ability to analyze crucial physical system interactions at an early stage is key to smart decisions that lower fuel consumption and engine emissions. Work carried out by Deltamarin in recent years has shown that reductions of 10–30% in these indicators are possible. Furthermore, opportunities to reuse residual energy have been uncovered.
To provide credible results, simulation models must accommodate not only complex physical systems but also the operational data used to calibrate models and describe realistic scenarios. MATLAB®, Simulink®, and Simscape™ have been used successfully to create a state-of-the-art design tool characterized by flexibility in decisive aspects such as data management and fidelity. Although initially intended for design tasks, this framework holds great business value in differentiating areas such as customer interaction, predictive maintenance, and the offering of technical services.
Electric motors are everywhere and are finding new applications every day. The technology to control motors is also evolving to be based on new platforms, such as Xilinx® Zynq®, that combine embedded processors with the programmable logic of FPGAs.
In this talk, you will learn how C and HDL code generation are used to produce implementations on Xilinx Zynq SoCs. You will also explore practical methods for developing motor controllers targeting Zynq SoCs, including the use of new HDL debugging capabilities.