border zones
human-computer, human-human, computer-computer
points of interaction
ways a person interacts with a product
instructing, conversing, manipulating, exploring, and responding
helpful for formulating conceptual models without implementation
where users issue instructions to system
keyboard shortcuts, menus, command line interfaces
supports many activities
quick and efficient
repeated actions on multiple objects
complexity challenges
having a conversation with a system
system responds as in human-human interaction
help/assistive facilities
chatbots
speech-based interfaces
familiar and less complex, but lower discoverability
single action, not repetition
manipulating objects
capitalize on knowledge of physical world
analogous to interaction with physical objects
continuous representation
rapid reversible incremental actions
immediate feedback
physical actions instead of issuing text commands
moving through virtual or physical environments
exploit knowledge of navigating existing spaces
system taking initiative
alert, describe, or show the user
common or traditional
command-line, GUI, multimedia, web-based
surface
pen-based, multi-touch
reality-based
tangible, virtual and augmented reality
body-based
gesture, haptic, gaze, wearables, brain-computer
agents
voice, robots and drones, smart, appliances
type commands
keyboard shortcuts
more efficient and quicker for some tasks
low discoverability
WIMP: windows, icons, menus, and pointer
address learnability issues
recognition over recall
overcome physical constraints of displays
multiple windows open
task switching
scrolled, stretched, overlapped, opened, closed, and moved
depict applications, objects, commands, tools, and status
recognition: easier to learn and remember than text labels
signifiers and feedback
similar, analogical, or arbitrary
convention
list of options
ordered top to bottom by frequency of use
group by similarity
flat, expanding, mega, collapsible, contextual
one level
small number of items
more options shown by incremental revealing
easier navigation
2D drop-down layout
view lots without scrolling
take up large visual space
good for browsing
accordion menus
collapsing content
provide structure overview
combine different media
images, text, video, sound, animation
most interfaces
training, educational, and entertainment
early forms text-based
usability versus attractiveness
desktop versus mobile
automatically resize, hide, and reveal interface elements
specify a viewport
<meta name="viewport" content="width=device-width, initial-scale=1.0">
vw / vh: % of viewport width and height
<h1 style="font-size:10vw">Hello World</h1>
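The automatic resizing, hiding, and revealing of elements is typically driven by CSS media queries; a minimal sketch (the class names and breakpoint are illustrative, not from the source):

```html
<style>
  .sidebar { width: 30vw; }          /* sized relative to the viewport */
  .menu-toggle { display: none; }    /* hidden on wide screens */
  @media (max-width: 600px) {        /* hypothetical mobile breakpoint */
    .sidebar { display: none; }      /* hide the sidebar */
    .menu-toggle { display: block; } /* reveal a collapsed menu instead */
  }
</style>
```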
single- and multi-touch
smartphones and tablets
tabletops and digital whiteboards
gestures
write, draw, command with a pen
take advantage of well-honed skills
tablets
annotation
computer-generated graphic simulations
feel virtually real when interacting
different viewpoints
fidelity
entry cost and requirements
superimposing the virtual onto physical reality
mobile devices, headsets, head-up displays (HUDs)
what type of augmentation, when, where
risk of information pollution
use of physical objects and sensors
no single locus of control
affordances
education
embodied cognition
moving arms and hands to communicate
computer vision
machine learning
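A common pipeline: computer vision extracts hand or body landmarks per frame, and a learned classifier maps those features to a gesture label. A toy sketch of the classification step only, using nearest-neighbour matching (the landmark vectors and gesture names are invented for illustration; real systems use trained neural networks):

```python
import math

# Toy "training" data: each gesture is a flattened list of (x, y)
# landmark coordinates. Values are invented purely for illustration.
TRAINING = {
    "swipe_left":  [0.9, 0.5, 0.5, 0.5, 0.1, 0.5],
    "swipe_right": [0.1, 0.5, 0.5, 0.5, 0.9, 0.5],
    "push":        [0.5, 0.9, 0.5, 0.5, 0.5, 0.1],
}

def classify(features):
    """Nearest-neighbour gesture classifier over landmark features."""
    return min(TRAINING, key=lambda label: math.dist(TRAINING[label], features))

# An observed trajectory close to the "swipe_right" template:
print(classify([0.15, 0.5, 0.5, 0.48, 0.85, 0.5]))  # swipe_right
```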
provide tactile feedback
vibrations and force
gaze: eye movements
very fast
provides implicit context
worn on the body
smart watches
sensors
devices, but also clothing
communicate interaction via brainwaves
electrodes on the scalp to detect firing neurons
using voice to interact
command or conversing interactions
speech recognition
voice assistants
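In command-style voice interaction, the speech recognizer transcribes the utterance to text, which is then matched to an intent. A toy keyword-overlap sketch of that matching step (the intents and keywords are invented; real assistants use statistical language understanding):

```python
# Hypothetical intents mapped to trigger keywords.
INTENTS = {
    "set_timer":  {"timer", "remind"},
    "play_music": {"play", "music", "song"},
    "weather":    {"weather", "forecast", "rain"},
}

def match_intent(utterance):
    """Return the intent whose keywords overlap the utterance most."""
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best  # None when nothing matches

print(match_intent("play my favourite song"))  # play_music
```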
manufacturing
search and rescue
domestic robots
joysticks and controllers
follow a human
agency
artificial intelligence
context-aware
improve efficiency and cost effectiveness
human-building interaction
everyday machines in the home
do something specific quickly
connectivity
main driving factor
how do we know?
we often don't
Chapter 8: Data Gathering
Interaction Design: Beyond Human-Computer Interaction