Touchscreens have revolutionized the lives of smart device users by making many tasks easier. However, not all touchscreens are alike in terms of their performance, and not all share the same drawbacks.  

One of the first commercially available computers to feature a touchscreen interface was the HP-150, released in 1983. It used infrared light beams to detect a finger interacting with the display. The beams were emitted from holes around the bezel, which made them prone to clogging with dust and required regular cleaning during continuous use.  

Ad for HP150 touchscreen computer

State of touch 

Touchscreens today have evolved to the point that they can sample finger movements far faster than the screen can update. These high sampling rates quickly became appealing to end users, especially the mobile gaming community. For example, the Asus ROG Phone 6 features a 720 Hz touch sampling rate and a 165 Hz screen refresh rate.  

Modern touchscreens have become so fast that the sensing itself is no longer the slowest link in the user experience. Device performance now plays the more significant role, since the user interacts with an app on the phone, not just with its touchscreen display. 


Smartphone response time measurements for consistent user experience 

One powerful technique for discovering potential performance issues is repeating the same test several times or for an extended period. For example, gaming benchmarks record the system's performance metrics, such as the minimum, average, and maximum FPS achieved during a test run. Comparing multiple measurements against one another can yield meaningful conclusions. For instance, if the application runs fast, it should achieve a high average FPS. However, a significant gap between the average and minimum FPS can indicate performance issues such as stuttering or lag. 

A similar approach could also be applied to measure latency accurately. Such a system would need to: 

- Repeat the test multiple times 
- Compute the average, maximum, and minimum latencies observed 
- Analyze the results to find inconsistencies 
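The three steps above can be sketched in a few lines of Python; the sample latencies and the inconsistency threshold below are illustrative assumptions, not measured data:

```python
# Sketch: aggregate repeated latency measurements and flag inconsistencies.
# The 2x ratio threshold is an illustrative assumption, not a product spec.

def summarize_latencies(samples_ms):
    """Return min/avg/max of repeated latency measurements (in ms)."""
    return {
        "min": min(samples_ms),
        "avg": sum(samples_ms) / len(samples_ms),
        "max": max(samples_ms),
    }

def is_inconsistent(stats, ratio=2.0):
    """Flag a run whose worst-case latency far exceeds the average."""
    return stats["max"] > ratio * stats["avg"]

runs = [38.0, 41.5, 39.2, 40.1, 112.7]  # illustrative taps, one outlier
stats = summarize_latencies(runs)
print(stats, is_inconsistent(stats))
```

A run with one 112 ms outlier among ~40 ms taps is flagged, which is exactly the kind of intermittent lag (a keyboard that opens late once in a while) that averages alone would hide.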

One of the most frequent sources of frustration for mobile users is a keyboard that fails to open. Even though the keyboard usually opens quickly, if it fails to do so even once, the user may accidentally start typing in the wrong place. Various factors, such as background tasks, picture-in-picture, GPS usage, and network activity, can generate these inconsistencies. By automatically measuring device latency, it is possible to identify such inconsistencies and improve the user experience of modern smartphones. 


Typing on a touchscreen smartphone keyboard

As previously stated, the user's experience is determined more by the performance of the device than by the refresh rate of the touchscreen. In the next section, we will explain how a black-box method for measuring device latency should work.  

How to perform touchscreen latency measurements (use case) 

Black-box testing refers to evaluating the entire system end to end, from the perspective of the end user. Little attention is paid to the system's architecture or code; instead, the expectation is that requests simulating user activity receive the appropriate responses. To this end, everyday device use is simulated to observe whether devices deliver the promised outcome.   

Since describing full device performance testing is a lengthy, complex process, the following paragraphs address how a black-box method for determining device latency is put into practice using MATT, the smart device testing robot. The objective is to determine how long a smartphone's touchscreen takes to "wake up": the time between the moment the screen is tapped and the moment it fully lights up.  


To guarantee accurate results, the smartphone is mounted and secured on MATT's testing platform. For this use case, MATT is fitted with a one-fingered effector (the use case requires only a simple tap; for multitouch testing, a three-fingered effector is recommended). A tripod and a tripod adapter hold the high-speed camera in place on the testing robot's platform, outside the testing area. The high-speed camera is an external element essential to the latency measurement process.  

The latency measurement process using a black-box method 

The preliminary setup consists of mounting the smartphone on MATT's testing platform and positioning the high-speed camera so that it has maximum visibility of the phone's touchscreen and can observe the process unimpaired. While filming, the high-speed camera should capture three main events essential to the latency measurement. The first is the moment MATT's effector starts descending towards the testing area, signaling the beginning of the tap input. The second is the moment MATT's finger touches the phone, signaling that the screen input has been delivered. The third and final event is the moment the screen starts to light up. 
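Once those three events have been located in the footage, turning them into a latency figure is simple frame arithmetic. The frame numbers and camera frame rate below are illustrative assumptions for the sketch:

```python
# Sketch: convert the three event frames captured by the high-speed camera
# into timings. The fps value and frame numbers are illustrative only.

def frame_delta_ms(start_frame, end_frame, fps):
    """Time between two frames of a recording, in milliseconds."""
    return (end_frame - start_frame) * 1000.0 / fps

FPS = 1000  # assumed high-speed camera frame rate

descent_frame = 120   # effector starts descending (event 1)
touch_frame = 180     # finger contacts the screen (event 2)
wake_frame = 245      # screen begins to light up (event 3)

travel_ms = frame_delta_ms(descent_frame, touch_frame, FPS)
wake_latency_ms = frame_delta_ms(touch_frame, wake_frame, FPS)
print(travel_ms, wake_latency_ms)
```

The camera's frame rate bounds the measurement resolution: at an assumed 1000 fps, each frame is 1 ms apart, so latencies are known to within about a millisecond.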

The high-speed camera captures fast movements (in this case taps) at the precise time they happen, hence obtaining the necessary information to determine the variable of interest.  

Once the latency measurement recording setup is finalized, the next step is to define MATT’s actions. As in its everyday use by end users, to activate the smartphone’s touchscreen (‘wake’ the smartphone) a single tap must be performed for iOS devices and a double-tap gesture for Android devices. 

Gesture Automation 

In the process of determining the wake latency of a smartphone's touchscreen, MATT is programmed to carry out two operations: performing the actual screen input that prompts the phone to "wake up", and reporting the moment in time when the tap happened.  

To replicate the end user's interaction with the device as accurately as possible, the effector's tapping speed is set to 500 mm/s, equivalent to a person tapping the screen quickly. The coordinates of the input are adjusted so that MATT executes the right action (in the center of the touchscreen for this particular use case, since the required input is a simple tap).  
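MATT's actual programming interface is not shown in this article, so the following is a purely hypothetical Python sketch of how such a tap could be parameterized; the function and field names are illustrative assumptions, not MATT's API:

```python
# Hypothetical sketch: parameterizing a center-screen tap.
# Names and structure are illustrative, not MATT's actual interface.

def center_tap_config(screen_width_mm, screen_height_mm, speed_mm_s=500):
    """Build a tap at the screen center at the given effector speed."""
    return {
        "x_mm": screen_width_mm / 2,   # horizontal center
        "y_mm": screen_height_mm / 2,  # vertical center
        "speed_mm_s": speed_mm_s,      # ~a fast human tap, per the text
    }

# Example for a hypothetical 70 x 150 mm display
print(center_tap_config(70.0, 150.0))
```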

Through these straightforward steps, MATT completes the first operation of the latency measurement process.  


Measuring touchscreen latency using a high-speed camera 

The externally mounted high-speed camera captures and records the whole process: the execution of the tapping action and the wake moment of the touchscreen in response to the input.  

After the screen-waking process has been recorded and the footage stored frame by frame to disk, the data is processed to determine how long the touchscreen takes to "wake up" after being tapped. The analysis compares frame entropies: a lossy image encoding function is applied to every frame, and the resulting output sizes are watched for disturbances. When the output sizes start to change visibly, the moments when the screen starts to "wake up" and when it is fully "awake" can be determined. In effect, the input of MATT's finger on the touchscreen can be observed through its effect on the encoding size.   
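As a rough illustration of this encoding-size analysis, the sketch below compresses synthetic frames and finds the first jump in output size. zlib (a lossless compressor) stands in for the lossy encoder mentioned above, and the synthetic frames and threshold are assumptions for the example:

```python
import zlib

# Sketch of the frame-size analysis: compress every frame and look for the
# first jump in output size. zlib is a stand-in for the lossy encoder
# described in the article; the 1.2x threshold is illustrative.

def encoded_sizes(frames):
    """Compressed byte size of each frame (bytes-like objects)."""
    return [len(zlib.compress(f)) for f in frames]

def first_change(sizes, threshold=1.2):
    """Index of the first frame whose size jumps past threshold x baseline."""
    baseline = sizes[0]
    for i, size in enumerate(sizes):
        if size > threshold * baseline:
            return i
    return None

# Synthetic example: uniform "dark" frames, then busier "lit" frames.
dark = bytes(1000)            # all zeros compress tightly
lit = bytes(range(256)) * 4   # less regular content compresses worse
frames = [dark] * 5 + [lit] * 5
print(first_change(encoded_sizes(frames)))  # index of the first "lit" frame
```

A dark, static screen compresses to almost nothing, so the first frame with visible content produces a clear jump in encoded size; that frame index, combined with the camera's frame rate, yields the wake latency.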

Carrying out the actions defined above, obtaining the needed measurements, and processing them to observe the latency between the touchscreen interaction and the screen's "wake-up" moment is a process that deserves to be explained in its own right. For this reason, the methodology for determining touchscreen latency using high-speed cameras has been extensively described in a separate whitepaper covering the technical aspects and results.  

Step-by-step automated device latency identification. Learn from the whitepaper 

Download the supporting whitepaper, including the code used to perform and identify device latency automatically, to learn more. 



Adapta Robotics specializes in designing and building robotic solutions for businesses. To support automation and drive progress, Adapta solutions are not only high-performance but also user-friendly and adaptable, integrating computer vision, AI, and the latest hardware technology. All of this is supported by in-house innovation in hardware, software development, and manufacturing, expertise in AI, and cross-industry knowledge, executed by a dedicated Adapta team of engineers with strong robotics and R&D backgrounds.  




‘Lagging’ is a known cause of frustration and dissatisfaction for device users. Measured in milliseconds, the response time to a tap on a smartphone’s screen might not be an insurmountable issue in everyday use, but in the mobile gaming industry a delay between an action and its reaction can make an important difference.

To help provide a user experience without lag, MATT performs a testing protocol meant to determine the touch-to-display response time, and it is also an essential tool for benchmarking different phone brands.

MATT as the ideal testing solution

Building MATT to be adaptable to as many testing scenarios as possible was a mandatory requirement from the initial stages of its development. With this in mind, MATT was built as a universal tool, able to integrate with a wide range of external devices (cameras, light modules, measurement devices, sensors, and more). This made it easy to test multiple mobile phone brands and simplified the integration of the gear required for the latency measurements.

Testing scenario, variables, and MATT’s input

Generally, in everyday use, a quick reaction to a finger tapping a smartphone’s screen improves the user experience across many technological products. Some industries represent ideal testing scenarios for this analysis, such as mobile gaming. The described use case follows three variables: touch-to-display response time, display refresh period, and touch sampling period (a full explanation of each variable can be found by clicking the link below). Because these are measured in time increments as small as milliseconds, MATT’s versatility allows a high-speed camera to be integrated with the robot to capture each frame as it is displayed on the smartphone’s screen. The observation and processing of the touch response time is therefore performed seamlessly.
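The two period variables follow directly from the advertised rates; using the ROG Phone 6 figures quoted earlier in the article:

```python
# Sketch: convert advertised rates into the periods this use case tracks.
# Rates are the ROG Phone 6 figures mentioned earlier in the article.

def period_ms(rate_hz):
    """Period (ms) corresponding to a sampling or refresh rate (Hz)."""
    return 1000.0 / rate_hz

touch_sampling_ms = period_ms(720)  # ~1.39 ms between touch samples
refresh_ms = period_ms(165)         # ~6.06 ms between display refreshes
print(round(touch_sampling_ms, 2), round(refresh_ms, 2))
```

Both periods sit well under 10 ms, which is why a high-speed camera (rather than an ordinary video recording) is needed to resolve them.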

Using MATT and the process behind the use case

To build an accurate data sampling process, three main steps are executed as follows:

  1. The device calibration is performed using MATT’s framework.
  2. The robot is programmed to execute the desired action (tap, rotate, swipe, or more complex gestures) and notify the high-speed camera when the action has finished.
  3. With the support of MATT’s interface, any testing program (and its different actions and elements) can be easily created and, furthermore, replicated on any other DUT.
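Since MATT’s framework API is not part of this article, the three-step flow can only be sketched with hypothetical stand-in names; the function below is illustrative, not actual MATT code:

```python
# Hypothetical sketch of the three-step sampling cycle.
# All names are illustrative stand-ins, not MATT's framework API.

def run_sampling_cycle(device, gesture, camera):
    """Calibrate, execute the gesture, and notify the camera, in order."""
    log = []
    log.append(f"calibrate:{device}")          # step 1: device calibration
    log.append(f"execute:{gesture}@{device}")  # step 2: gesture execution
    log.append(f"notify:{camera}")             # step 2: camera notification
    return log

# Step 3: the same cycle can be replayed unchanged on any other DUT.
for dut in ["DUT-1", "DUT-2"]:
    print(run_sampling_cycle(dut, "tap", "highspeed-cam"))
```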

Tackling benchmarking

One of the use case’s objectives is to compare the response times of devices, regardless of the phones’ manufacturing characteristics. Because MATT stores created programs and uses robust computer vision techniques, the same testing cycles can be performed on diverse smartphone brands. With the devices tested under the same conditions, benchmarking the different touch-to-display response times is easily achieved.

There’s more to it

So far, MATT’s role in the testing protocol has been explored, along with the most important aspects of the use case. But there is more. Find out how a MATT fleet has been used to measure and benchmark touch-to-display response time, and the results of the study, by reading our client’s display protocol for measuring touch response time on smartphones.

