Getting a Waveshare touchscreen to work with WPE

Hi,

I’m trying to build a biometric (facial recognition) access control panel using balena.
I’ve got the camera and the display working (I’m using this one here: the Waveshare 5inch HDMI LCD).

However, I can’t get the touchscreen to work.

Specifically, the instructions from here depend on an X server being installed, since they use evdev and xcalibrator. None of the containers in my app (wpe, camera and mqtt) have X11 installed.

Please help

My boot variables are as follows:

RESIN_HOST_CONFIG_dt_param = "i2c_arm=on","spi=on","audio=on"
RESIN_HOST_CONFIG_enable_uart = 1
RESIN_HOST_CONFIG_gpu_mem = 192
RESIN_HOST_CONFIG_dtoverlay = "pi3-disable-wifi","pi3-disable-bt","ads7846,cs=1,penirq=25,penirq_pull=2,speed=50000,keep_vref_on=0,swapxy=0,pmax=255,xohms=150,xmin=200,xmax=3900,ymin=200,ymax=3900"
RESIN_HOST_CONFIG_hdmi_cvt = 800 480 60 6 0 0 0
RESIN_HOST_CONFIG_hdmi_drive = 1
RESIN_HOST_CONFIG_hdmi_force_hotplug = 1
RESIN_HOST_CONFIG_hdmi_group = 2
RESIN_HOST_CONFIG_hdmi_mode = 87
RESIN_HOST_CONFIG_max_usb_current = 1
RESIN_HOST_CONFIG_start_x = 1
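
For reference, if I’ve understood how balena applies these host config variables, they should end up in /boot/config.txt roughly as follows (reconstructed from the values above rather than copied from the device, so treat it as approximate). This is what I’d compare against the Waveshare wiki’s suggested config.txt:

  dtparam=i2c_arm=on
  dtparam=spi=on
  dtparam=audio=on
  enable_uart=1
  gpu_mem=192
  dtoverlay=pi3-disable-wifi
  dtoverlay=pi3-disable-bt
  dtoverlay=ads7846,cs=1,penirq=25,penirq_pull=2,speed=50000,keep_vref_on=0,swapxy=0,pmax=255,xohms=150,xmin=200,xmax=3900,ymin=200,ymax=3900
  hdmi_cvt=800 480 60 6 0 0 0
  hdmi_drive=1
  hdmi_force_hotplug=1
  hdmi_group=2
  hdmi_mode=87
  max_usb_current=1
  start_x=1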

Hi @surekap, sounds like a cool project!

How exactly are you trying to use the touchscreen? If you want, you can install X11 in a separate container, doing roughly what is shown here: https://github.com/balena-io/resin-electronjs/blob/master/Dockerfile.template (though that Dockerfile installs other things too).
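
A trimmed-down sketch of that approach could look something like this (a sketch only, assuming a Debian-based balena image for a Pi 3; the package list is just the X server plus the evdev input driver and the calibration tool, and you’d still need to point an app at the resulting display):

  FROM balenalib/raspberrypi3-debian

  # X server, the evdev input driver and the touchscreen calibration tool
  RUN apt-get update && apt-get install -y --no-install-recommends \
      xserver-xorg \
      xinit \
      xserver-xorg-input-evdev \
      xinput-calibrator \
      && rm -rf /var/lib/apt/lists/*

  # start X on the device's display
  CMD ["startx"]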

If instead you want to use the touchscreen without an X server, I imagine your code must be trying to access a device in /dev/input/? Could you share a bit more about your code that uses the touchscreen?

Also, looking at /dev/input should show whether the touchscreen is being detected. If it isn’t, the problem might be a kernel module that isn’t being loaded. For that it would be good to check the output of lsmod and compare it with what you get when you use the touchscreen on another OS that you know works.
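
For example, from a shell on the device (or in a privileged container with access to /dev), something like the following should tell you whether the ads7846 overlay produced an input device; the event number at the end is just an example, yours may differ:

  # list input device nodes; a working touchscreen usually shows up as an eventN node
  ls -l /dev/input/

  # show which input devices the kernel knows about, with their names
  cat /proc/bus/input/devices

  # check whether the touchscreen driver module is loaded
  lsmod | grep ads7846

  # optional: dump raw touch events to confirm the panel actually reports touches
  # (evtest may need to be installed first)
  evtest /dev/input/event0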

Hi @pcarranzav,

Thanks for the response.

My docker-compose basically builds https://github.com/balena-io-projects/balena-wpe.git, and my wpe-init file contains the same environment variable as the balena repo:
WPE_BCMRPI_TOUCH=1 WPELauncher $WPE_URL

I assumed this enabled touch input for the webpage that is loaded, so I don’t have any special code processing /dev/input.

The WPE_URL points to a webpage served by a Flask server in another container.
The webpage itself is just a few buttons and tabs that make AJAX calls. It also has a WebRTC frame to show the camera feed.
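
Roughly, the wpe service in my compose file is something like this (a simplified sketch; the privileged flag and the Flask service name/port are from memory, so they may not match exactly):

  wpe:
    build: https://github.com/balena-io-projects/balena-wpe.git
    privileged: true        # so the container can reach the GPU and /dev/input
    restart: always
    environment:
      - WPE_URL=http://frontend:5000   # the Flask container serving the webpage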

Based on your comments I have a few questions:

  1. Does WPE include an input stack that translates mouse/touch input into clicks on the webpage?
  2. Does WPE have a JavaScript engine that can handle AJAX and WebRTC (to display the camera feed)?
  3. Is WPE the right choice? I need a lightweight and simple GUI for the access panel, since the face recognition uses https://github.com/ageitgey/face_recognition/tree/master/face_recognition to process up to 2000 faces at 10 fps on a 640x480 camera. That is already quite CPU- and memory-intensive, and the RPI B+ may have trouble. I have looked at using Electron or even GTK/Tkinter, but they are not exactly lightweight.

Hi @surekap

  1. Yes, it should handle touch input.
  2. Yes, it should handle the JS fine.
  3. It seems like the right choice. If you don’t have much CPU time available, just don’t make it render anything too complicated.

I don’t know if this helps, but maybe you can test your setup with the Chromium image we were talking about a while back. I’m using it with a Waveshare display successfully
(with the few fixes mentioned in the issues).