Performance Evaluation: A Matter of Performance

Virtual prototypes have been successfully applied for verification purposes in the MOST environment. Aspects of the MOST bus such as ring break diagnosis, start-up behaviour, or the MOST High Protocol (MHP) are all verified with virtual models. In the following article, an approach is presented in which a virtual model is used for performance evaluations.

Performance evaluation of different MOST network scenarios is an important but time-consuming task. Using state-of-the-art approaches, the first results can only be achieved after a real hardware prototype is available. Furthermore, automated analysis of multiple communication scenarios is very difficult on real hardware, especially if different network configurations have to be taken into account. 

By using a virtual prototype, these efforts can be reduced. With a virtual model, it is possible to evaluate the performance characteristics of different system configurations at an early stage of development, without a software or hardware prototype. If prototypes become available later in the design process, they can be integrated into the virtual model approach.

Another advantage of the virtual model is its high configurability: different scenarios and parameters can be executed with a single model, so the statistical analysis required for a performance evaluation can be produced easily. On real systems, it is sometimes challenging to obtain the desired characteristics. In the virtual model, all information present can be accessed, and information acquired from different parts of the overall system can be compared without further effort. In a real distributed system, by contrast, linking the gathered information together can be a challenging task.

In the context of a performance evaluation, it can also be time-consuming to reproduce an observed behaviour, because of the number of conditions that have to be met. The message sequence from different nodes, the number of message retries, or the buffer accesses can differ slightly between runs of the same test case, and sometimes these slight differences have a huge effect on the observed behaviour. With a virtual model, the tests are deterministically reproducible.

A model with good performance is essential

One of the key aspects of the model is its simulation performance at run-time. A statistical evaluation requires a large amount of data, so a model with good performance is essential. As a general rule, a higher abstraction level results in better model performance; however, because details of the system are ignored, the accuracy of the results can decrease. In a first step, a high abstraction level is chosen.

The model consists of communicating MHP devices; the underlying ring structure is ignored. It can be shown that the delay introduced by the physical network and the forwarding time within a device are negligible, so the device network is modelled as a fully meshed network in which the devices communicate with each other directly. Another abstraction concerns the exchanged data format: data is not modelled at the bit level; instead, the devices transmit whole data frames at once. Figure 1 illustrates the chosen abstraction level; on two connections, a possible traffic shape is presented. Each device encapsulates a given MHP protocol implementation. The advantage of this approach is that the actual implementation is used for the performance evaluation, so no abstract description of the protocol is needed. This reduces the overall design complexity and demonstrates how developed software can be coupled with virtual models.
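The chosen abstraction can be pictured with a minimal sketch: devices form a fully meshed network and deliver whole frames in a single step, with no physical-layer delay or per-device forwarding time. All class and field names below are illustrative assumptions, not the actual model's API.

```python
class Device:
    """A node in the fully meshed abstraction of the MOST network."""

    def __init__(self, address):
        self.address = address
        self.inbox = []   # received frames (whole frames, not bit streams)
        self.peers = {}   # address -> Device; fully meshed connectivity

    def connect(self, other):
        # Full mesh: every device can reach every other device directly.
        self.peers[other.address] = other
        other.peers[self.address] = self

    def send(self, dest_address, frame):
        # Deliver the complete frame at once; ring delay and forwarding
        # time are ignored at this abstraction level.
        self.peers[dest_address].inbox.append((self.address, frame))


a, b = Device(0x100), Device(0x101)
a.connect(b)
a.send(0x101, b"payload")
```

In a lower-abstraction model, `send` would instead schedule the frame with a transport delay; here the direct delivery is exactly the simplification the text describes.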

In the context of the performance evaluation, the device structure is similar to that of real systems, including modules for the Intelligent Network Interface Controller (INIC) and an External Host Controller (EHC). Different MHP versions can be encapsulated. Each device is personalized with parameters such as its source address, and can additionally be configured with performance-relevant parameters like the buffer size, the sending rate, or the desired packet size.
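A device configuration along these lines might group the parameters named above; the field names and units here are assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class DeviceConfig:
    """Hypothetical per-device configuration for the virtual model."""
    source_address: int   # personalizes the device on the network
    buffer_size: int      # buffer capacity, e.g. in frames (assumed unit)
    sending_rate: float   # frames per millisecond (assumed unit)
    packet_size: int      # desired MHP packet size in bytes


cfg = DeviceConfig(source_address=0x101, buffer_size=8,
                   sending_rate=2.0, packet_size=1024)
```

Keeping all performance-relevant parameters in one structure is what makes it easy to sweep different network configurations with a single model.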

Each device is associated with an application that consists of a number of traffic generators. The main parameter of an application is its interrupt rate, which determines how often the application can receive a data packet. The application consists of a main loop that polls the receive buffer and transmits data if necessary; the interrupt rate determines how often this loop is triggered. The associated traffic generators shape the outgoing data stream of a device.
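The polling application described above can be sketched as a loop that fires once per interrupt period, polls the receive buffer, and transmits pending data. The function signature and names are assumptions; real implementations would be event-driven rather than time-stepped.

```python
def run_application(interrupt_rate_hz, duration_s, rx_buffer, tx_queue, transmit):
    """Simplified main loop: triggered once per interrupt period."""
    period = 1.0 / interrupt_rate_hz
    t = 0.0
    while t < duration_s:
        # Poll the receive buffer: consume at most one frame per trigger.
        if rx_buffer:
            rx_buffer.pop(0)
        # Transmit data if a traffic generator has queued a frame.
        if tx_queue:
            transmit(tx_queue.pop(0))
        t += period  # wait for the next interrupt


sent = []
run_application(100.0, 0.05, rx_buffer=[], tx_queue=[b"f1", b"f2"],
                transmit=sent.append)
```

The interrupt rate directly bounds the throughput of the application, which is why it is the main performance-relevant parameter here.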

At the moment, three different types of traffic generators are implemented. The first one attempts to send a single data packet once; it is parameterized with a start time and a data size, each specified as a range, and the actual values are chosen randomly from these ranges. The second traffic generator produces a continuous stream of data packets; its parameters are the start time, the data size of one packet, and the send period. At the start of each period, the generator attempts to transmit a data packet. This generator can be used, for example, to generate a basic load on the bus.

The last traffic generator creates a continuous stream, but unlike the previous module, the data size and send period are randomly assigned. In this manner, an unpredictable user interaction can be simulated. Figure 2 presents the structure of a device with two traffic generators.
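Two of the three generator types can be sketched as follows: a one-shot sender whose start time and data size are drawn from given ranges, and a periodic sender that provides a basic bus load. Class names and interfaces are illustrative assumptions.

```python
import random


class OneShotGenerator:
    """Tries to send a single packet once, at a randomized time and size."""

    def __init__(self, rng, start_range, size_range):
        self.start = rng.uniform(*start_range)  # start time drawn from range
        self.size = rng.randint(*size_range)    # data size drawn from range

    def frames(self, until):
        return [(self.start, self.size)] if self.start <= until else []


class PeriodicGenerator:
    """Emits one packet at the start of each period -- a basic bus load."""

    def __init__(self, start, size, period):
        self.start, self.size, self.period = start, size, period

    def frames(self, until):
        t, out = self.start, []
        while t <= until:
            out.append((t, self.size))
            t += self.period
        return out


rng = random.Random(42)  # seeded, so the traffic pattern is reproducible
burst = OneShotGenerator(rng, start_range=(0.0, 1.0), size_range=(64, 1500))
base = PeriodicGenerator(start=0.0, size=512, period=0.25)
```

The third generator type, randomized size and period for simulating unpredictable user interaction, would follow the same pattern with both parameters drawn from the seeded generator.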

In certain cases the model uses randomly generated values. As mentioned, some traffic generators use random values. To avoid synchronous start-up behaviour, the start-up times of the different devices are randomly delayed as well. A seeded pseudo-random number generator is used to guarantee the reproducibility of different runs. A test run can be reproduced with the same seed value.
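The reproducibility property rests on a standard technique: one pseudo-random number generator seeded per run yields an identical sequence of values, so repeating a run with the same seed repeats the test exactly. The function below is a hypothetical sketch of the randomized start-up delays mentioned above.

```python
import random


def startup_delays(seed, n_devices, max_delay_ms=10.0):
    """Random per-device start-up delays to avoid synchronous start-up."""
    rng = random.Random(seed)  # one seeded generator per simulation run
    return [rng.uniform(0.0, max_delay_ms) for _ in range(n_devices)]


run_a = startup_delays(seed=1234, n_devices=4)
run_b = startup_delays(seed=1234, n_devices=4)  # same seed -> identical run
```

Logging the seed alongside each simulation result is enough to replay any observed behaviour deterministically.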