This 6-part series of 90-minute virtual seminars focuses on the durability, reliability and associated data analytics challenges of in-field, proving ground and laboratory vehicle testing and analysis, with special sessions on electric vehicle batteries and the challenges of measuring and calculating their electrical power.
The presentation details the results of a series of tests to determine the Dynamic Charge Acceptance (DCA) performance of small form-factor carbon-enhanced VRLA cells designed for use in Hybrid Electric Vehicle (HEV) applications, together with standard lead-acid and lithium iron phosphate (LFP) cells. The results demonstrate how varying the conditions and parameters of the standard DCA test regime can provide a superior evaluation of DCA performance and lead to a better understanding of cell behaviour under real-world conditions. It also demonstrates the importance of recognising the limitations of existing test procedures and how they should be considered before using results from such tests to make judgements about real-world battery performance.
The electrodes in a lithium-ion battery undergo reversible electrochemical reactions as lithium enters and leaves the atomic structure of the intercalated lithium compounds. The particle size, shape and crystal structure of the electrode materials play an important role in Li-ion diffusion and transport during charge-discharge cycling. The reversibility of these electrochemical reactions with cycling largely governs the lifetime of the battery. With certain battery chemistries, these electrochemical reactions leave the electrode materials in an unstable state, pushing them into irreversible physical or structural changes and ultimately leading to battery degradation with cycling. Understanding the intricacies of these electrochemical reactions is, therefore, an important step towards improving battery performance. X-ray diffraction (XRD) and scattering are well suited to studying these atomic phase changes, and also serve as tools to understand and optimize the pathways that lithium uses to move through the electrodes. However, XRD investigation of battery materials requires special considerations that differ from routine powder diffraction measurements.
This presentation will review the techniques for dimensional measurements in battery electrode materials and how these relate to battery capacity. Special focus will be on X-ray diffraction and scattering methods, including considerations for experimental design. Examples will be shared of how these considerations are applied to cathode material analysis, including Rietveld refinement to quantify phase mixtures and atomic structure. A case study on the analysis of NCM-based batteries with operando XRD, tracking phase changes and potential degradation mechanisms during charge-discharge cycling, will also be shared.
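As a concrete illustration of the diffraction geometry underlying these measurements, the short sketch below converts peak positions from a scan into interplanar d-spacings via Bragg's law. The wavelength is the common Cu Kα1 laboratory source; the peak positions are hypothetical values for illustration, not data from the presentation.

```python
import math

# Cu K-alpha1 wavelength in angstroms (a common laboratory XRD source)
WAVELENGTH = 1.5406

def d_spacing(two_theta_deg: float, wavelength: float = WAVELENGTH) -> float:
    """Interplanar spacing d from a peak position via Bragg's law:
    n * lambda = 2 * d * sin(theta), taking the first-order reflection (n = 1)."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# Hypothetical peak positions (degrees 2-theta) from a cathode powder scan
peaks = [18.7, 36.7, 44.4]
for p in peaks:
    print(f"2theta = {p:5.1f} deg  ->  d = {d_spacing(p):.3f} A")
```

Tracking how these d-spacings shift during charge-discharge cycling is what reveals lattice expansion, contraction and phase changes in operando experiments.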
This presentation offers a brief summary of battery durability and reliability design issues:
In this presentation we consider:
The electric powertrain is a complex system with both electrical and mechanical elements. It has five main subsystems: batteries, inverters, motors, torque conversion, and the control system. Each of these subsystems has its own losses, dynamics, thermal limitations, and control loop. Simplifying the measurement chain to bring all of these signals into one location and continuously recording the data offers many gains to the engineer: a streamlined test system allows for faster data collection, processing, and model correlation. This session will explore test and measurement of electric motors and inverters at a system level, including electrical power, mechanical power, temperature, NVH, and the dynamics of each of these systems. Specific topics will include torque ripple, NVH, efficiency, and control system calibration.
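As a minimal illustration of one gain from bringing electrical and mechanical signals together, the sketch below combines a torque/speed measurement and a DC-link measurement into a single efficiency figure. The operating-point values are illustrative assumptions, not measured data.

```python
import math

def mechanical_power(torque_nm: float, speed_rpm: float) -> float:
    """Shaft power in watts: P = torque * angular velocity."""
    omega = speed_rpm * 2.0 * math.pi / 60.0   # rad/s
    return torque_nm * omega

def electrical_power_dc(voltage_v: float, current_a: float) -> float:
    """DC-link input power in watts."""
    return voltage_v * current_a

# Illustrative operating point (assumed values, not measurements)
p_mech = mechanical_power(torque_nm=150.0, speed_rpm=4000.0)    # ~62.8 kW
p_elec = electrical_power_dc(voltage_v=400.0, current_a=175.0)  # 70 kW
efficiency = p_mech / p_elec
print(f"mechanical {p_mech/1e3:.1f} kW, electrical {p_elec/1e3:.1f} kW, "
      f"combined motor+inverter efficiency {efficiency:.1%}")
```

Synchronizing the two measurements on one timebase is what makes such efficiency maps trustworthy across transient operating points.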
The theoretical and real-world range of an electric vehicle may differ significantly. To maximize the range and overall efficiency of the vehicle, it is necessary to understand and characterize how the vehicle is used and determine through meticulous measurement and analysis where efficiency losses occur.
Quantifying AC power is particularly difficult. Unlike the conventional electricity grid, electric vehicles convert DC to AC using an electrical inverter. Using pulse-width modulation, these inverters produce a frequency-modulated, non-sinusoidal, transient waveform.
This presentation introduces the concept of AC power analysis post-processing. Starting with steady-state sinusoidal waveforms, it explains the basic concepts of Active, Reactive, Apparent Power, and the Power Factor. It then considers the effect of non-sinusoidal and transient waveforms. Digital Signal Processing (DSP) techniques are introduced that take advantage of high-speed digitized data.
The presentation covers the following methods of AC power analysis:
Time-averaging methods:
Instantaneous methods:
A case study shows the advantages of each method based on real data from an electric vehicle.
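The time-averaging quantities introduced above can be sketched in a few lines of post-processing. The following is a minimal example, assuming a steady-state sinusoidal test signal over an integer number of cycles rather than a real inverter waveform:

```python
import math

def ac_power(v: list[float], i: list[float]) -> dict:
    """Time-averaging AC power analysis over an integer number of cycles
    of digitized voltage and current samples."""
    n = len(v)
    p_active = sum(vk * ik for vk, ik in zip(v, i)) / n      # P = mean(v*i)
    v_rms = math.sqrt(sum(vk * vk for vk in v) / n)
    i_rms = math.sqrt(sum(ik * ik for ik in i) / n)
    s_apparent = v_rms * i_rms                               # S = Vrms * Irms
    q_reactive = math.sqrt(max(s_apparent**2 - p_active**2, 0.0))
    return {"P": p_active, "S": s_apparent, "Q": q_reactive,
            "PF": p_active / s_apparent}

# Synthetic steady-state signal: 230 V RMS, 5 A RMS, current lagging by 30 deg
fs, f, n = 10_000, 50, 2_000   # exactly 10 cycles of 50 Hz at 10 kHz
t = [k / fs for k in range(n)]
v = [230 * math.sqrt(2) * math.sin(2 * math.pi * f * tk) for tk in t]
i = [5 * math.sqrt(2) * math.sin(2 * math.pi * f * tk - math.pi / 6) for tk in t]
result = ac_power(v, i)
print({k: round(val, 1) for k, val in result.items()})
# PF should be close to cos(30 deg) ~= 0.866
```

Instantaneous methods, by contrast, operate sample-by-sample on v(t)*i(t) without assuming periodicity, which is what makes them attractive for the transient, non-sinusoidal inverter waveforms described above.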
Vehicle design for durability is a long-established discipline, affecting all aspects of vehicle engineering. However, despite well-proven techniques and processes, the loads used for component and system design are measured on a mule or even a competitor vehicle at the beginning of a vehicle programme and may not be revisited until well into an advanced prototype stage. At that stage, any durability concerns arising from the use of approximated load cases can be difficult to address because of costly tooling updates.
This presentation demonstrates the use of advanced simulation methods to investigate the effect of vehicle design variability on wheel loads and to produce a set of load cases which are robust and encompass the loads that could arise across all possible tuning variations. As well as using robot driver technology, this process can be extended to introduce the human element through the use of VI-grade's advanced driving simulators with a human driver, to study the impact of human variability on predicted wheel forces.
The presentation explains how 5G technology is used for connected vehicle testing at Millbrook Proving Ground. It gives an overview of the onsite networks available, and an explanation of our approach to connectivity, vehicle data, and data processing. It includes examples of real-world use cases involving data acquisition from test vehicles.
Engineers in the fields of test & measurement and maintenance are facing the need to process a mixture of various data types, including low-cost digital data from communication buses and connected vehicles. Bus data is an inexpensive source of readily available parameters in a complex electronic system. Accessing bus data offers huge potential benefits, such as understanding customer usage from connected vehicles in order to improve the product validation process and thereby reduce unexpected failures. However, bus data also raises a number of challenges for analysis: the quality of the data, the quantity of data, inconsistency between sources, and the lack of certain data.
The solution would be a dedicated digital bus data processing tool that performs analytics on huge quantities of unevenly sampled, heterogeneous sensor data in a scalable fashion and at high speed.
The main benefit for engineers would be the ability to convert overwhelming data volumes into actionable decisions without the intervention of data scientists.
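As one illustration of the kind of processing such a tool must perform, the sketch below aligns two unevenly sampled bus signals onto a common timebase by linear interpolation. The signal names and sample values are hypothetical:

```python
def resample(times: list[float], values: list[float],
             grid: list[float]) -> list[float]:
    """Linearly interpolate an unevenly sampled signal onto a time grid.
    Values outside the recorded span are held at the nearest endpoint."""
    out = []
    j = 0
    for t in grid:
        # advance to the last recorded sample at or before t
        while j + 1 < len(times) and times[j + 1] <= t:
            j += 1
        if t <= times[0]:
            out.append(values[0])
        elif j + 1 >= len(times):
            out.append(values[-1])
        else:
            frac = (t - times[j]) / (times[j + 1] - times[j])
            out.append(values[j] + frac * (values[j + 1] - values[j]))
    return out

# Two hypothetical CAN signals arriving at different, irregular rates
speed_t, speed_v = [0.00, 0.11, 0.19, 0.32], [10.0, 12.0, 13.0, 15.0]
temp_t,  temp_v  = [0.00, 0.25],             [40.0, 41.0]
grid = [0.0, 0.1, 0.2, 0.3]                  # common 10 Hz timebase
aligned = {"speed": resample(speed_t, speed_v, grid),
           "temp":  resample(temp_t, temp_v, grid)}
print(aligned)
```

Once all channels share a timebase, cross-channel analytics (correlations, duty cycles, event detection) become straightforward and can be parallelized across fleets.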
The U.S. Army is working to improve Reliability, Availability and Maintainability (RAM) as well as durability for its tactical wheeled vehicle fleet while reducing operating costs through the use of Reliability Centered Maintenance (RCM), Condition Based Maintenance (CBM) and Health and Usage Monitoring Systems (HUMS).
The Vehicle Performance, Reliability & Operations ‐ Analysis (VePRO‐A) program focuses on the scalability of a HUMS and CBM software system, with the ultimate goal of reducing cost and extending equipment life. The objective of this program is to configure scalable, robust software components and an end-to-end integrated system for deployment, broadening the understanding of operational usage severity and deterioration as it relates to RAM, cost, readiness, durability, etc. The VePRO‐A system approach demonstrates a progressive increase in the ability to analyze operational usage data, with a program goal of managing up to 20,000 vehicles and a scalable solution that can serve as a baseline towards the U.S. Army’s long-term goal of monitoring up to 80,000 Tactical Wheeled Vehicles (TWVs). The overall system architecture and features will be presented.
When designing and testing large Condition Based Monitoring (CBM) systems (10,000+ assets), an important step is to ensure the system can handle the expected volumes of data and can sustain growth as more assets are added over time. When commissioning a new system, customers typically do not have 10,000+ assets instrumented in the field producing data on a daily basis, and the process of asset onboarding can be expensive with long lead times. To design and build large, usable CBM systems, alternative methods are needed to generate a suitable volume of asset data that allows engineers to configure and test the system before investing in large-scale “real” asset onboarding. This data needs to be representative of actual usage and conditions while having sufficient variability from asset to asset, channel to channel, and day to day. Achieving this representative data volume requires more than simply duplicating existing sample records.
This presentation discusses the implementation and usage of a Vehicle Data Simulator. This simulator allows for the generation of multiple vehicle operational data sets that contain sufficient data variability while being representative of real vehicle operating conditions. The data from the simulator draws from many external sources and is of suitable fidelity that it can be used within the CBM system for detailed calculations and derivations similar to those performed with real vehicle data. Although this simulator focuses on the creation of wheeled vehicle operational data, the methods described in this presentation could easily be adapted and applied to alternative asset groups such as tracked vehicles, rail vehicles or aircraft. The Vehicle Data Simulator design and usage scenarios will be presented.
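A minimal sketch of the idea behind such a simulator is shown below: each asset gets its own 'personality' parameters, and each day adds random scatter on top. The field names and distributions are illustrative assumptions, not the VePRO‐A design.

```python
import random

def simulate_fleet(n_vehicles: int, n_days: int, seed: int = 42) -> list[dict]:
    """Generate synthetic daily usage records with per-asset and day-to-day
    variability, as a stand-in for onboarding thousands of real assets."""
    rng = random.Random(seed)
    records = []
    for vid in range(n_vehicles):
        # Per-asset 'personality': some vehicles are driven harder than others
        base_km = rng.uniform(40.0, 160.0)
        harshness = rng.uniform(0.8, 1.4)
        for day in range(n_days):
            km = max(0.0, rng.gauss(base_km, base_km * 0.25))  # daily scatter
            records.append({
                "vehicle_id": vid,
                "day": day,
                "distance_km": round(km, 1),
                "engine_hours": round(km / rng.uniform(30.0, 60.0), 2),
                "max_accel_g": round(0.3 * harshness * rng.uniform(0.7, 1.3), 3),
            })
    return records

fleet = simulate_fleet(n_vehicles=1000, n_days=30)
print(len(fleet), "records; first record:", fleet[0])
```

Because the generator is seeded, the same fleet can be regenerated deterministically, which makes load testing of downstream CBM calculations repeatable.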
Probabilistic design approaches provide a route to assessing the reliability of safety-critical structures, potentially yielding more optimum designs, increased service lives or a quantification of the level of safety within a structure. However, one of the inhibiting factors that routinely prevents the implementation of probabilistic approaches is the lack of available data to statistically characterise the variability present within engineering parameters. Fortunately, the rapidly growing field of ‘big-data’ now provides greater opportunities for capturing engineering design datasets than ever before. This presentation will demonstrate a novel data source, in the form of real-time aircraft tracking, which has been exploited within the development of a probabilistic fatigue methodology for aircraft landing gear structures.
"Fatigue is the progressive weakening of a material caused by cyclic or otherwise varying loads, even though the resulting stresses are well within the static strength limits. The art of fatigue simulation is to be approximately right rather than exactly wrong." - Prof. Keith Miller
The fatigue design of mechanical systems has historically followed a 'deterministic' process. That is, for a given set of inputs it will return a consistent set of fatigue life results with no scatter. In reality the inputs are statistically uncertain -- they have an expected value and a variability. Deterministic design methods take no explicit account of this uncertainty. In practice, the designer applies a safety factor to each input parameter along with an additional safety factor on the final result to allow for 'modelling errors'. In most cases, the engineer is fairly certain that the simulation results are conservative, but cannot state with any confidence what the final safety margin, reliability or failure rate will be. Furthermore, it is almost impossible to qualify the simulation by experimental testing, because the test lives are significantly higher than the conservative simulations would suggest.
In comparison, a 'Probabilistic Fatigue Simulation' method is 'stochastic' in nature. In this presentation we review the concepts of 'Stochastic Design' under the following headings:
3. Uncertainty quantification: including Design for 'Reliability', and Design for 'Robustness'.
Probabilistic Fatigue Simulation offers many significant advantages over the traditional deterministic design approach:
A case study is presented.
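To make the stochastic idea concrete, the following sketch estimates a probability of failure by Monte Carlo sampling of a Basquin S-N relation. The distributions and parameter values are illustrative assumptions, not taken from the case study:

```python
import random

def fatigue_life_cycles(stress_mpa: float, sf_mpa: float, b: float) -> float:
    """Basquin S-N relation: S = Sf' * (2N)^b  =>  N = 0.5 * (S/Sf')^(1/b)."""
    return 0.5 * (stress_mpa / sf_mpa) ** (1.0 / b)

def prob_failure(design_life: float, n_samples: int = 100_000,
                 seed: int = 0) -> float:
    """Monte Carlo estimate of the probability that fatigue life falls short
    of a design life, with stochastic load and material strength inputs."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        stress = max(1.0, rng.gauss(200.0, 25.0))  # stress amplitude, MPa
        sf = max(1.0, rng.gauss(900.0, 60.0))      # fatigue strength coeff., MPa
        life = fatigue_life_cycles(stress, sf, b=-0.09)
        if life < design_life:
            failures += 1
    return failures / n_samples

pf = prob_failure(design_life=2e6)
print(f"estimated probability of failure before 2e6 cycles: {pf:.3%}")
```

Unlike a single deterministic life with stacked safety factors, the output here is a failure rate that can be compared directly against a reliability target, and the sensitivity to each input distribution can be explored by varying it.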
This presentation outlines how combining multiple data sources can yield key information about an asset’s health and improve operational efficiency. Avoiding failure of the asset during operation is critical, especially if the asset is hard for the maintenance team to access. Every time an asset fails, production efficiency suffers. An effective maintenance plan helps improve that efficiency, and managing such a plan requires reliable data about the asset. Getting access to such data can be very challenging. Prenscia Engineering Solutions is working with multiple industry and academic partners to provide a complete solution that can be implemented without years of planning and execution. The approach combines operational, sensor, and maintenance data with a digital twin of the asset to improve data quality, capture and verify failure events, and perform analysis that helps improve operational efficiency.
This presentation describes practical approaches for the mining and process industries to overcome data quality issues and pursue more data-driven reliability practices. Like most industries, the mining and process industries are constantly looking to improve equipment reliability and institute new best work practices. However, a clear understanding of the requirements regarding data, systems, and work processes is often lacking. The presentation describes the use of proven techniques, including life data analysis, ‘reliability-twins’, and new predictive analysis methods, and how they build upon foundations like ‘reliability-centered maintenance’ for more data-driven reliability.
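As one example of the life data analysis mentioned above, the sketch below fits a two-parameter Weibull distribution to a set of complete (uncensored) failure times by median-rank regression, a standard technique in reliability engineering. The failure times here are hypothetical:

```python
import math

def weibull_fit(failure_times: list[float]) -> tuple[float, float]:
    """Fit a 2-parameter Weibull distribution to complete failure data by
    median-rank regression: fit a line to ln(-ln(1-F)) versus ln(t).
    Returns (beta, eta) = (shape, characteristic life)."""
    t = sorted(failure_times)
    n = len(t)
    x = [math.log(ti) for ti in t]
    # Bernard's approximation to the median rank of the i-th ordered failure
    f = [(i + 0.7) / (n + 0.4) for i in range(n)]
    y = [math.log(-math.log(1.0 - fi)) for fi in f]
    xbar, ybar = sum(x) / n, sum(y) / n
    beta = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
            / sum((xi - xbar) ** 2 for xi in x))     # slope = shape
    eta = math.exp(xbar - ybar / beta)               # from fitted intercept
    return beta, eta

# Hypothetical pump failure times in operating hours
hours = [410.0, 780.0, 920.0, 1150.0, 1400.0, 1620.0, 1950.0, 2400.0]
beta, eta = weibull_fit(hours)
print(f"shape beta = {beta:.2f}, characteristic life eta = {eta:.0f} h")
```

A shape parameter above 1 indicates wear-out behaviour, which supports time-based replacement; a shape near or below 1 argues instead for condition-based strategies.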