I would like to use eye tracking to pre-select a screen button or icon and then use input from a physical device to select it. I can currently do this by turning on my eye-tracking device's 'Mouse cursor control' option and setting the GazeSpeaker 'Input mode' setting to 'Mouse click': I pre-select with my eyes and then select with a left mouse click. However, I have great difficulty with this method for two reasons:
1) The selection point on the screen is below and to the right of where the mouse cursor appears during pre-selection. (This is also true when I set GazeSpeaker's 'Input mode' to 'Mouse movement' and don't use eye tracking.)
2) 'Mouse cursor control' is very unstable and jumpy compared with GazeSpeaker's eye-tracking input mode.
So I ask: is there a feature that would allow a user to pre-select with eye tracking and then select with input from a device, instead of selecting by dwell? Thank you for your time, and thank you for your life-changing communication tool.
Yes, we will investigate whether we can implement this in Gazespeaker. My first guess is that it should be feasible, but we need to run some tests and review the impact on the code. What kind of device would you use to perform the equivalent of a mouse click?
We have updated Gazespeaker to support selection with a switch or a mouse click. This is available as of version 1.3.3 (the update is offered automatically if you have enabled automatic updates in the settings screen).
To allow selection of a cell with a mouse click or a switch, you only need to set 'Selection mode' to 'Click' (instead of 'Stare', the default, i.e. dwell) in the settings screen.