Artificial Intelligence in Medicine (2008) 42, 153—163

http://www.intl.elsevierhealth.com/journals/aiim

MOPET: A context-aware and user-adaptive wearable system for fitness training

Fabio Buttussi, Luca Chittaro *

HCI Lab, Department of Mathematics and Computer Science, University of Udine, Via delle Scienze 206, 33100 Udine, Italy

Received 20 November 2006; received in revised form 13 November 2007; accepted 21 November 2007

KEYWORDS
Wearable systems;
Context-awareness;
User-adaptation;
Fitness training;
Embodied agents

Summary

Objective: Cardiovascular disease, obesity, and lack of physical fitness are increasingly common and negatively affect people's health, requiring medical assistance and decreasing people's wellness and productivity. In recent years, researchers as well as companies have been increasingly investigating wearable devices for fitness applications, with the aim of improving users' health in terms of cardiovascular benefits, weight loss, or muscle strength. Dedicated GPS devices, accelerometers, step counters and heart rate monitors are already commercially available, but they are usually very limited in terms of user interaction and artificial intelligence capabilities. This significantly limits the training and motivation support provided by current systems, making them poorly suited for untrained people who are interested in fitness for health rather than for competitive purposes. To better train and motivate users, we propose the mobile personal trainer (MOPET) system.
Methods and material: MOPET is a wearable system that supervises a physical fitness activity based on alternating jogging and fitness exercises in outdoor environments. By exploiting real-time data coming from sensors, knowledge elicited from a sport physiologist and a professional trainer, and a user model that is built and periodically updated through a guided autotest, MOPET can provide motivation as well as safety and health advice, adapted to the user and the context. To better interact with the user, MOPET also displays a 3D embodied agent that speaks, suggests stretching or strengthening exercises according to the user's current condition, and demonstrates how to correctly perform exercises with interactive 3D animations.
Results and conclusion: By describing MOPET, we show how context-aware and user-adaptive techniques can be applied to the fitness domain. In particular, we describe how such techniques can be exploited to train, motivate, and supervise users in a wearable personal training system for outdoor fitness activity.
© 2007 Elsevier B.V. All rights reserved.

* Corresponding author. Tel.: +39 0432 558450; fax: +39 0432 558450.
E-mail addresses: fabio.buttussi@dimi.uniud.it (F. Buttussi), luca.chittaro@dimi.uniud.it (L. Chittaro).

0933-3657/$ — see front matter © 2007 Elsevier B.V. All rights reserved.
doi:10.1016/j.artmed.2007.11.004

Petitioner Apple Inc. – Ex. 1026, p. 153
1. Introduction

Cardiovascular disease, obesity, and lack of physical fitness are increasingly common and negatively affect people's health, requiring medical assistance and decreasing people's wellness and productivity. These problems can be prevented or alleviated by regularly practicing physical activities and sports, but many people are not motivated enough, or engage in physical activities rarely, incorrectly or irregularly, wasting potential benefits and even risking injuries.

Information technology researchers as well as companies are devoting increasing attention to sports, fitness and physical activities, supporting people with new devices and applications at home [1,2] and outdoors [3—9]. In particular, wearable solutions are very promising because they can assist the user anywhere and allow her to get the benefits of open-air environments, such as clean air and sunlight. However, the user interfaces of current commercial products, as well as their artificial intelligence capabilities, are extremely limited. Moreover, current products do not focus much on user motivation and training: most solutions are based on a digital watch interface and measure or derive the user's parameters without trying to recognize interesting patterns and provide more sophisticated user-adaptive and context-aware advice.

To overcome the above-mentioned limitations and provide users with personalized training and motivation support, this paper proposes the MOPET system, a wearable system that supervises a physical fitness activity based on alternating jogging and fitness exercises in open-air environments. MOPET provides motivation as well as safety and health advice, adapted to the user and the context, by exploiting real-time data coming from sensors, knowledge elicited from a sport physiologist and a professional trainer, and a user model that is built and periodically updated through a guided autotest. To improve user interaction, MOPET also displays a 3D embodied agent that speaks, suggests stretching or strengthening exercises according to the user's current condition, and demonstrates how to correctly perform the chosen exercises with interactive 3D animations.

MOPET is designed to be used anywhere the user can run or walk outdoors. The user wears a heart rate monitor with a 3D accelerometer around her chest, and a PDA with a built-in GPS unit on her wrist. The user's parameters, such as heart rate, position and exercising time, are analyzed and visualized. The first time the user runs MOPET, the embodied agent asks for the user's gender, age, weight and height, then it invites the user to perform an autotest: a particular exercise which consists in walking onto and off a step, as demonstrated by the embodied agent with a 3D animation. By considering the information provided by the user and her heart rate during the autotest, MOPET builds an initial user model. Based on the user model and the information acquired or derived from the sensors, MOPET suggests increasing or reducing jogging speed, provides advice and proposes different types of exercises, which are demonstrated with interactive 3D animations.

The paper is organized as follows. Section 2 surveys related work on computer-aided physical exercise, especially focusing on mobile and wearable solutions. Section 3 summarizes our previous work on a preliminary prototype of MOPET [10]. Section 4 analyzes in detail how we extended that preliminary prototype with context-awareness and user-adaptation capabilities. Section 5 provides conclusions and outlines future research directions.

2. Related work

2.1. Indoor applications based on embodied agents

Philips Virtual Coach [1] was one of the first projects to employ an embodied agent which acts as a personal trainer to motivate the user. The system is meant to be used at home with a stationary exercise bike. A 2D embodied agent is projected on a screen, which also shows a virtual environment representing an open-air landscape. With a study on 24 users, the authors showed that the embodied agent lowered perceived pressure and tension, while the virtual environment offered fun and had a beneficial effect on motivation. However, the embodied agent was not as effective as the authors expected, but this may be due to the information provided by the agent rather than to the agent itself. Indeed, the system provides the user only with information about her heart rate, instead of motivating her by reporting the calories she burnt or speaking about other benefits of physical activity.
EyeToy: Kinetic [2] is an indoor fitness training system for the Playstation 2 and exploits an EyeToy camera, i.e. a cheap webcam-like device, which detects the user's movements. The application allows the user to choose between a male or female personal trainer and creates an individual 12-week plan, taking into account the user's height, weight, age, familiarity with EyeToy games and physical condition (by means of a short questionnaire). The application adopts a game style, presenting martial arts, Tai Chi and cardio exercises as entertainment. During the games, the user's position is monitored to determine her score and give her suggestions on how to perform the exercise better. The personal trainer is a 3D embodied agent that comments on the game and on daily, weekly and monthly performance, giving the user an ‘‘E’’ to ‘‘A+’’ mark and congratulating her for the results or encouraging her to keep training and further improve. However, since EyeToy: Kinetic uses a single simple camera, it can detect only movements on a 2D plane, severely limiting the actions users can do. Moreover, since it does not consider heart rate, it cannot detect if the user is exercising at the correct intensity.

2.2. Wearable applications

The two solutions described in the previous section are both meant for indoor use. Therefore, they do not allow one to get the benefits of exercising in outdoor natural environments. To support people in open-air physical activities, some researchers [3,8,9] and companies [4—7] proposed wearable sensors, such as heart rate monitors and pedometers, and mobile applications for notebooks, PDAs and smartphones.

To monitor the user's physiological parameters (e.g., heart rate and temperature) during physical activities, Knight et al. [3] proposed the SensVest, a wearable device integrated in a shirt that can measure the user's heart rate, body temperature and acceleration and send them to a remote computer. This device focuses on sensing aspects and does not come with analysis or training applications that could run on a mobile device.

Polar heart rate monitors [4] are commercial wearable devices that consist of a wrist-worn watch unit and a chest-worn heart rate sensor. Besides measuring heart rate and deriving other parameters, such as burnt calories, Polar devices can give basic motivational feedback, such as ‘‘calorie bullets’’, i.e. beeps that occur every time a certain amount of calories is burnt, inciting the user to keep running and burn more calories. After a training session, some Polar devices allow the user to transfer her data to a PC or to send them to a Polar web site for further analysis. One device is able to send data via infrared to Nokia 5140 phones provided with a Java application that allows the user to plan and keep track of her physical activities. However, since heart rate data can be sent only after the user has completed the training session, real-time data analysis is not possible.

Nike+ iPod Sport Kit [5] consists of a pedometer that fits into special Nike shoes and a receiver that is connected to an iPod. The iPod can be worn using special Nike T-shirts, and the personal training software can provide information on distance and speed while the user listens to music. Since it relies on steps and elapsed time, the system can incite the user to run for a distance or a period that fits a plan based upon her goals and her previous performance, while monitoring of physiological parameters is not supported.
Unlike the previously discussed systems, Suunto t4 [6] provides a function (Coach) that monitors the user's workout routine and makes suggestions about adjustments to it. It follows the American College of Sports Medicine [11] guidelines to plan the optimal intensity and duration of the next workout, adapting planned exercise length to the user's performance and maintaining an up-to-date plan. If the user's workout exertion is above or below the optimal level, the system adjusts the suggested intensity and duration of the next workout to compensate for the difference. The device provides information about heart rate, burnt calories, speed and distance, along with an estimation of how the workout improves the user's aerobic fitness, but, unfortunately, this information is only displayed with numbers, text and bar charts on a watch-like unit.

Mobile graphical analysis of the user's parameters, along with training plans and 2D illustrations of exercises, is supported by the VidaOne MySportTraining software for PDAs [7]. The software can acquire data in real-time from a GPS, but unfortunately heart rate data can be acquired only via infrared after the training session. Therefore, advice and motivation during the physical activity cannot be provided.

Oliver and Flores-Mangas [8] proposed MPTrain, a smartphone-based trainer that analyzes heart rate and acceleration data to select and change the user's favorite music. By choosing music with a specific rhythm, MPTrain encourages the user to speed up, slow down or keep the pace according to her training goals.

Personal wellness coach [9] is another system that tracks the user's movement, monitors heart rate, and provides music feedback. This wearable system can send the data produced by a heart rate monitor, an accelerometer and a body temperature sensor to a laptop that can be up to 9 m away. Besides providing music feedback, the system can warn of overexertion and motivate the user with interactive audio. However, the need for a laptop limits the wearability of personal wellness coach. As a result, mobile physical activities, such as outdoor running and exercising on fitness trails, become impractical.

3. Preliminary prototype of MOPET

A preliminary simple prototype of MOPET, developed at the beginning of our project, was based on a PocketPC connected to a GPS device and was meant to guide users on fitness trails, i.e. trails where the user has to alternate jogging and exercising. The user runs along an indicated path and has to stop when she arrives at exercise stations. In each exercise station, the user finds an exercise tool to perform a specific fitness exercise. The prototype includes an embodied agent (Fig. 1) and helps users in three ways:

- Navigation: location-aware audio and visual navigation instructions are provided to allow the user to follow the correct path in the fitness trail.
- Motivation: audio and visual feedback on the user's speed is provided. This is meant to motivate the user to maintain an adequate speed during the entire session.
- Training: when the user reaches an exercise station, the embodied agent is animated in 3D to show how to correctly perform the exercise.

As an alternative to the embodied agent, one could display videos of a real trainer performing the exercises on the mobile device, but using 3D animations has two main advantages over pre-recorded videos: (i) 3D animations can be interactively explored by the user, who can easily watch the exercise from the desired positions to clarify her doubts, and (ii) animations require much less space than videos on the mobile device.

Figure 1 The embodied agent is demonstrating a typical exercise with rings on a fitness trail.

3.1. Navigation

MOPET displays a location-aware map of the trail on the screen of the PDA. The user's position on the map is marked with an icon depicting a running person. Other icons are used to mark checkpoints: the start—finish (a chequered flag), the fitness trail exercise stations (a person performing an exercise), the points where the trail forks (a compass) and additional points where MOPET tells the user her speed (a red triangular flag). Moreover, the trail is marked with a polygonal line which is initially blue. MOPET provides common navigational cues, such as changing the user's position on the map based on GPS data and changing the color of the polygonal lines to indicate the completed parts of the trail. Fig. 2 shows the map after the user has completed the left half of the trail. However, this graphical feedback can be conveniently examined only by a user who is not running, so we also provide the user with audio information: when she approaches a fork, MOPET gives her vocal directions using the internal speaker of the PDA or a Bluetooth earphone.

Figure 2 Map with the left half of the trail completed.

3.2. Motivating the user

MOPET motivates the user by exploiting graphics as well as audio. The application calculates the user's average speed on the different parts of the trail. We divided speed into four ranges: slow walking (<5 km/h), fast walking (5—8 km/h), moderate running (8—12 km/h) and fast running (>12 km/h). To provide the user with immediate audio feedback, MOPET tells the user her current speed and incites her to increase or decrease her speed as soon as a checkpoint is reached. For each speed range, different pre-recorded sentences are available. Sentences are not aggressive and try to highlight positive aspects of the user's performance, even if she walks very slowly (e.g., ‘‘You are walking at a regular pace. If you are not tired, try to increase your speed.’’). We chose to incite users gently because the evaluation results of [1], which incites aggressively (e.g., ‘‘Your heart rate is slow! Run faster!’’), were not as positive as expected. The user can also get visual feedback about her speed during the entire session by checking the color of the lines corresponding to the different parts of the trail, since they map speed onto a blue—red temperature scale.
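The range-based feedback described above can be sketched as a simple lookup. This is our own illustration, not MOPET's actual code: the function and message names are hypothetical, while the four speed thresholds are those given in the text, and only the slow-walking sentence is quoted from the paper.

```python
# Hypothetical sketch of MOPET's checkpoint feedback (names are ours).
# Thresholds match the four ranges given above (km/h).
def speed_range(speed_kmh: float) -> str:
    """Classify a speed in km/h into the four MOPET ranges."""
    if speed_kmh < 5:
        return "slow walking"
    elif speed_kmh <= 8:
        return "fast walking"
    elif speed_kmh <= 12:
        return "moderate running"
    else:
        return "fast running"

# One gentle, non-aggressive message per range, in the spirit of the paper;
# only the first sentence is taken verbatim from the text.
FEEDBACK = {
    "slow walking": "You are walking at a regular pace. If you are not tired, try to increase your speed.",
    "fast walking": "Good walking pace. Try a light run if you feel fine.",
    "moderate running": "You are running at a good pace. Keep it up.",
    "fast running": "Great speed! Slow down if you feel tired.",
}

def checkpoint_message(speed_kmh: float) -> str:
    """Pick the pre-recorded sentence for the current speed range."""
    return FEEDBACK[speed_range(speed_kmh)]
```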

3.3. Training

In fitness trails, exercises are usually explained by graphic plates in the stations (see Fig. 3 for an example). These plates are often difficult to understand, and exercises could thus be performed improperly, wasting the benefits of the physical activity and also risking injuries.

Figure 3 Graphic plate of a fitness trail exercise.

Therefore, MOPET gives location-aware exercise demonstrations and explanations on how to perform the exercises correctly and safely: as the user approaches a fitness trail exercise station, the embodied agent first whistles to attract the user's attention and invites the user to look at the PDA display, then it demonstrates how to correctly perform the exercise with a 3D animation (for example, Fig. 1 refers to the demonstration of an exercise with rings).

We evaluated the navigation, motivation and training support provided by MOPET on 12 users. GPS logs, questionnaires and videos of users' performance were analyzed, showing that MOPET is more useful than fitness trail maps for helping users to orient themselves in a fitness trail. MOPET is also much more effective than metal plates for learning how to correctly perform exercises. The mean of users' ratings for motivation support was 3.33 on a five-value Likert scale. This was partly due to the very limited personalization capabilities of the training system, caused by the absence of a user model (e.g., we used general values for speed thresholds, without considering the particular user's weight, age and so forth), and to context-awareness relying only on GPS data. The evaluation of the first prototype of MOPET is described in detail in [10].

4. The new MOPET: context-awareness and user-adaptation extensions

Starting from the analysis of the limitations of the first prototype of MOPET and the suggestions provided by the users, we extended the system in different directions, focusing on artificial intelligence, context-awareness and user-adaptation aspects to provide more effective motivation support as well as safety and health advice. MOPET now offers three new personalized functionalities:

- it guides the user through the autotest described in Section 1, also suggesting how frequently she should walk onto and off the step;
- it supports jogging from one fitness exercise to another, by (i) visualizing information on speed and heart rate, (ii) providing motivational and safety advice, and (iii) suggesting appropriate exercises for those situations where the user is not in a fitness trail with exercise stations;
- it provides advice while the user performs an exercise.

To acquire more information about the user, we added support for a new wireless sensor, i.e. a heart rate monitor with a 3D accelerometer. Fig. 4 shows the devices worn by the user: the heart rate monitor with the 3D accelerometer is worn on the user's chest, while the PDA is worn using a wristband and gets position data through an integrated SiRF Star III GPS.

As a result, MOPET can now acquire or derive the following information, which constitutes the sensed context:

- kinematic information, i.e. the user's position, 3D acceleration and speed;
- physiological information, i.e. ECG, heart rate and burnt calories;
- time elapsed from the beginning of the training session or since the user started to perform an exercise.

Besides analyzing the sensed context, MOPET relies on a user model, which consists of:

- personal information, i.e. weight, height, gender and age, which is provided by the user before starting the autotest;
- physiological information, i.e. the maximum volume of oxygen the user can consume in a minute, which is calculated with the autotest;
- the user's experience with each strengthening exercise, i.e. how many times the user completed the exercise keeping her heart rate under the required threshold, how many times she completed it with a higher heart rate, and how many times she quit the exercise instead of completing it.

Figure 4 Wearing MOPET during outdoor activities.

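The user model described above can be pictured as a small record structure. This is purely our illustration of the three groups of data the paper lists; the class and field names are hypothetical and do not come from the MOPET implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

# Hypothetical sketch of the MOPET user model (all names are ours).
@dataclass
class ExerciseExperience:
    """Per-exercise experience, as described in the third bullet above."""
    completed_below_threshold: int = 0  # completions with heart rate under the threshold
    completed_above_threshold: int = 0  # completions with a higher heart rate
    quit_count: int = 0                 # times the user quit instead of completing

@dataclass
class UserModel:
    # Personal information, provided before the autotest.
    weight_kg: float
    height_cm: float
    gender: str
    age: int
    # Physiological information, calculated with the autotest (VO2Max).
    vo2max: Optional[float] = None
    # Experience with each strengthening exercise, keyed by exercise name.
    experience: Dict[str, ExerciseExperience] = field(default_factory=dict)
```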
The high-level architecture of MOPET is illustrated in Fig. 5 and is organized into three main subsystems:

- The context analyzer acquires raw data from the sensors and analyzes it to derive additional information, such as burnt calories and speed, also considering information about the user (e.g., weight) available from the user model database. Collected and derived information about the sensed context is then provided to the user interface subsystem and to the training expert subsystem. At present, the context analyzer considers GPS and heart rate data, while it simply logs acceleration data for future off-line analysis.
- The user interface visualizes speed and heart rate graphs, the total amount of calories burnt in the current training session, and the time elapsed since the user started running. Moreover, whenever the training expert subsystem decides that advice, suggestions or 3D demonstrations are needed, the user interface retrieves the appropriate media from the media database and plays audio or 3D animations to the user.
- The training expert considers both the information provided by the context analyzer and the information in the user model database, and applies the rules stored in the knowledge base (KB) to decide if (and which) advice is needed. Considering the functionality chosen by the user, which is provided to the training expert by the user interface, the training expert activates one of its three modules: user autotest, jogging, or exercise. The user autotest module, besides deciding if advice or motivation is needed during the autotest, is responsible for updating the user model database with the information it calculates during the autotest.

The three subsystems are described in detail in the following subsections.

4.1. Context analyzer

While the user is jogging between exercises, the context analyzer considers her positions in a given time interval (currently set to 5 s) and calculates derived information, i.e. mean speed and calories burnt during the considered interval. While GPS data is usually accurate enough for measuring the user's speed, it occasionally contains highly inaccurate positions that should not be used for the calculation. Therefore, the context analyzer tries to detect and discard such inaccurate positions by calculating the mean speed in each time interval as follows:

Figure 5 MOPET architecture.

1. it calculates instantaneous speeds by considering single pairs of subsequent GPS positions;
2. if the instantaneous speed value for a particular pair of positions is physically feasible for a jogger, then the instantaneous speed is considered reliable, otherwise it is discarded;
3. it calculates the mean speed in the time interval by considering only the reliable values; if there are no reliable values in the time interval, the mean speed of the previous time interval is used.
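The three steps above can be sketched as follows. This is our own illustration, not MOPET's code: positions are simplified to planar metre coordinates, and the feasibility threshold (25 km/h, roughly world-class sprint pace) is our assumption, since the paper does not state the value it uses.

```python
import math

# Assumed feasibility threshold for a jogger: 25 km/h in m/s (our choice,
# not the paper's value).
MAX_JOGGER_SPEED_MS = 25 / 3.6

def mean_speed(positions, prev_mean=0.0):
    """Mean speed over one interval, discarding implausible GPS fixes.

    positions: list of (t_seconds, x_metres, y_metres) samples.
    prev_mean: mean speed of the previous interval, used as fallback.
    """
    reliable = []
    for (t0, x0, y0), (t1, x1, y1) in zip(positions, positions[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        # 1. instantaneous speed from a pair of subsequent positions
        v = math.hypot(x1 - x0, y1 - y0) / dt
        # 2. keep it only if physically feasible for a jogger
        if v <= MAX_JOGGER_SPEED_MS:
            reliable.append(v)
    # 3. mean of the reliable values; if none, reuse the previous mean
    return sum(reliable) / len(reliable) if reliable else prev_mean
```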

Once the context analyzer has calculated the user's speed in a time interval, it estimates the user's energy expenditure (J) in the same interval by using the following formula:

EnergyExpenditure = Speed × Weight × Time × EC

where Speed is the average speed in m/s in the time interval, Weight is the user's weight in kilograms retrieved from the user model (current weight is periodically provided or confirmed by the user before taking the autotest), Time is the duration of the time interval in seconds, and EC is the energetic cost of jogging. This last variable is expressed in joules per kilogram and per meter. Users who had their jogging energetic cost measured in a physiology laboratory can enter it during the autotest; otherwise EC is set to 3.8, i.e. an average value for joggers on flat ground.

While the joule is the standard unit for measuring energy in the International System of Units (SI), it is better to provide the user with energy expenditure in calories, since people commonly use this unit. Therefore, the context analyzer converts the energy expenditure to calories before sending it to the other subsystems.
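The formula and the unit conversion above can be sketched directly. One assumption on our part: the paper says "calories" in the everyday sense, which we take to mean dietary kilocalories (1 kcal = 4186.8 J); the paper does not state the conversion factor it uses.

```python
# Sketch of the energy-expenditure calculation described above.
JOULES_PER_KCAL = 4186.8  # assumed conversion; the paper does not give one
DEFAULT_EC = 3.8          # average energetic cost of jogging on flat ground, J/(kg*m)

def energy_expenditure_joules(speed_ms, weight_kg, time_s, ec=DEFAULT_EC):
    """EnergyExpenditure = Speed * Weight * Time * EC.

    Units check: (m/s) * kg * s * J/(kg*m) = J.
    """
    return speed_ms * weight_kg * time_s * ec

def energy_expenditure_kcal(speed_ms, weight_kg, time_s, ec=DEFAULT_EC):
    """Same quantity converted to kilocalories for display to the user."""
    return energy_expenditure_joules(speed_ms, weight_kg, time_s, ec) / JOULES_PER_KCAL
```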

Considering heart rate, the employed sensor provides only electrocardiographic (ECG) data. This data can be visualized as an electrocardiogram, which might be interesting for physiologists and cardiologists, but it is not familiar to the intended users of MOPET. Therefore, the context analyzer analyzes the ECG data and counts the local maxima in a time interval. Since the ECG data has two local maxima for each heartbeat, the analysis derives the number of heartbeats per minute (bpm), i.e. the user's heart rate.

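The peak-counting derivation above can be sketched as follows. The naive neighbour comparison is our simplification: a real ECG peak detector would also threshold on amplitude and filter noise, and the paper does not describe the exact detection method.

```python
# Sketch of the heart-rate derivation described above: count local maxima
# in a window of ECG samples and, since the signal has two local maxima
# per heartbeat, divide by two before scaling to beats per minute.
def heart_rate_bpm(ecg_samples, window_seconds):
    """ecg_samples: evenly spaced ECG values covering window_seconds."""
    maxima = sum(
        1
        for prev, cur, nxt in zip(ecg_samples, ecg_samples[1:], ecg_samples[2:])
        if prev < cur > nxt  # strictly higher than both neighbours
    )
    beats = maxima / 2.0  # two local maxima per heartbeat
    return beats * 60.0 / window_seconds
```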
4.2. User interface

In designing the MOPET interface, we had to deal with many challenging constraints and requirements. Mobile devices have several limitations in terms of performance, input peripherals and display [12], and the user's activities, such as jogging or exercising, further limit the attention she can devote to the interface.

We designed an interface that can be navigated by using only the two softkeys and the arrow pad of the PDA. The user is asked to use the pen and the virtual keyboard only to enter her personal information before the autotest, but the autotest is not needed in each training session and, after the first time, the user rarely needs to change her personal information. To further simplify user interaction with MOPET, the user interface can automatically switch screens, e.g., after the end of an exercise or after the autotest, the user interface returns to the screen which provides information about the jogging activity.

To provide suggestions, advice and accurate demonstrations of the exercises, we use a 3D embodied agent, as mentioned in the previous sections. The agent follows the ISO H-Anim specifications [13], which standardize the joints and segments of virtual human bodies. More specifically, it is compliant with level of articulation 2 of H-Anim: it can thus move 71 joints, displaying the correct position of all body parts, including fingers. Moreover, an embodied agent which can move and speak may attract users' attention and convey conversational and emotional cues [14,15], which is useful for effective user motivation. To model animations of 3D embodied agents and display them on mobile devices, we created a specific software tool called MAgeAniM [16].

Figure 6 The different screens of the user interface: (a) welcome screen, (b) jogging screen, (c) personal information screen, (d) exercise screen, (e) 3D screen and (f) autotest screen. Arrows indicate possible switches among screens. Arrows starting from buttons represent switches triggered by the user; the other arrows represent switches triggered by the system.

Fig. 6 illustrates the different screens of the user interface and the possible switches among screens: when the user starts MOPET, a welcome message is displayed (Fig. 6a); the user can press one of the two softkeys to watch an introductory 3D animation about system functionalities (Fig. 6e) or immediately get information about burnt calories and elapsed time, along with speed and heart rate graphs for the last minute of activity, in the jogging screen (Fig. 6b). If the user chooses to watch the introductory 3D animation, the jogging screen is automatically displayed at the end of the introduction.

In the jogging screen, the two softkeys allow the user to start a fitness exercise or the autotest. In the first case, the user interface asks the training expert subsystem to choose an exercise that is appropriate for the user and the sensed context (see Section 4.3.3); then it provides the user with an interactive 3D animation of the embodied agent that demonstrates how to perform the exercise correctly and safely. To view the correct movements from different viewpoints, the user can use the navigation pad: the left and right keys rotate the embodied agent, while the up and down keys move closer to or farther from it. At the end of the demonstration, the user interface displays a message which invites the user to start the exercise (Fig. 6d). During the exercise, the user interface plays voice messages which provide information on how many times the exercise should be repeated and suggest a correct rhythm.

If the user chooses the autotest, the user interface plays a voice message which introduces the autotest exercise, then it displays a form (Fig. 6c) to collect or update the user's personal information (i.e. gender, age, height and weight) and the height of the step that will be used for the test. After the user completes the form, the information is sent to the training expert, and then the embodied agent demonstrating how to perform the test is displayed. After the 3D animation, the user is invited to perform the test herself (Fig. 6f), following the suggestions and the advice of the training expert (see Section 4.3.1). At the end of the autotest, the interface provides the user with the results of the test, then it switches to the jogging screen.

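The screen switches described above amount to a small state machine. The table below is our own reading of the flow, not MOPET's code: the screen and trigger names are hypothetical, distinguishing user-triggered switches (softkeys) from system-triggered ones (end of an animation, exercise or autotest).

```python
# Hypothetical sketch of the screen-switching logic described above
# (screen and trigger names are ours, inferred from the text and Fig. 6).
TRANSITIONS = {
    ("welcome", "softkey_intro"): "3d_intro",       # watch the introductory 3D animation
    ("welcome", "softkey_skip"): "jogging",         # go straight to the jogging screen
    ("3d_intro", "intro_finished"): "jogging",      # system-triggered
    ("jogging", "softkey_exercise"): "exercise",    # demonstration, then exercise screen
    ("jogging", "softkey_autotest"): "personal_info",
    ("personal_info", "form_completed"): "autotest",
    ("exercise", "exercise_finished"): "jogging",   # system-triggered
    ("autotest", "autotest_finished"): "jogging",   # system-triggered, after the results
}

def next_screen(current, trigger):
    """Follow a transition; ignore triggers that are invalid on this screen."""
    return TRANSITIONS.get((current, trigger), current)
```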
4.3. Training expert

The training expert takes decisions by considering the sensed context, the information stored in the user model database, and the functionality requested by the user through the user interface.

As shown in Fig. 5, the training expert is organized into three modules (user autotest, jogging, and exercise), which are respectively devoted to the user autotest, the jogging activity and the physical exercises. In the following, we examine each of them in detail.

4.3.1. The user autotest module

The autotest allows MOPET to estimate the maximum volume of oxygen (VO2Max) the user can consume in 1 min. The user autotest module exploits known physiological equations which involve the user's heart rate (HeartRate), the power produced by the user during the exercise (Power), and some coefficients which vary with the user's gender and age. HeartRate and Power have to be managed carefully, since they can vary during the exercise. Moreover, to obtain a valid estimation of VO2Max, HeartRate should be nearly constant for some minutes and it should be inside a particular range.

Therefore, we use a context-aware strategy to determine the power which keeps the user's heart rate inside the range required by the autotest. Power can be calculated by using well-known physics equations. In particular, for the autotest with the step:

Power = (Weight × g × StepHeight) / TimePerStep

where Weight is the user's weight, g is the gravitational acceleration, StepHeight is the height of the step, and TimePerStep is the time required to walk onto or off the step. Since Weight, g and StepHeight are constants, the user autotest module should try different values of TimePerStep until the user's heart rate is inside the required range. The idea is to start with a TimePerStep value calculated for a safe value of Power, and then increase or decrease TimePerStep by considering the difference between the current heart rate and the needed one. TimePerStep values are sent to the user interface, which plays a voice message saying ‘‘Up!’’ or ‘‘Down!’’ every TimePerStep seconds to pace the exercise intensity.
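The strategy above can be sketched as follows. The power equation is taken from the text; the proportional adjustment rule and its gain are our assumptions, since the paper only states that TimePerStep starts from a safe Power value and is then increased or decreased according to the heart-rate difference.

```python
# Sketch of the context-aware TimePerStep strategy described above.
G = 9.81  # gravitational acceleration, m/s^2

def time_per_step(weight_kg, step_height_m, power_w):
    """Power = Weight * g * StepHeight / TimePerStep, solved for TimePerStep."""
    return weight_kg * G * step_height_m / power_w

def adjust_time_per_step(current_tps, heart_rate, target_rate, gain=0.01):
    """Lengthen TimePerStep (lower intensity) when the heart rate is above
    the target, shorten it when below. The proportional rule and the gain
    value are our assumptions, not the paper's."""
    return current_tps * (1.0 + gain * (heart_rate - target_rate))
```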

As a result, considering the same step with a given height, an overweight user may have a high heart rate even
