UNITED STATES PATENT AND TRADEMARK OFFICE
_________________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
_________________

APPLE INC.,
Petitioner

v.

FACETOFACE BIOMETRICS, INC.,
Patent Owner
_________________

Inter Partes Review Case No. IPR2022-00833
U.S. Patent No. 11,042,623
_________________

DECLARATION OF BENJAMIN B. BEDERSON
TABLE OF CONTENTS

I.    INTRODUCTION ................................................... 10
      A.  Background and Qualifications ............................. 10
II.   LEGAL FRAMEWORK ................................................ 19
III.  OVERVIEW AND LEGAL STANDARDS ................................... 26
      A.  Level of a Person of Ordinary Skill in the Art ............ 26
      B.  Claim Construction ......................................... 29
      C.  Claimed Invention in the ’623 Patent ...................... 29
      D.  Overview of the Technology ................................. 34
IV.   OVERVIEW OF THE PRIOR ART ...................................... 39
      A.  Overview of Corazza ........................................ 39
V.    GROUND I – CORAZZA DISCLOSES ALL THE LIMITATIONS OF
      CLAIMS 1, 4-6, 8, 11, AND 14-20 ................................ 46
      A.  Claim 1 .................................................... 46
          1(Pre) A computer device comprising at least one processor
          in communication with at least one memory device, wherein
          the at least one processor is programmed ................... 46
          1(a) receive a selection of an emoticon .................... 47
          1(b) monitor a sensor feed provided by one or more sensors
          of the computer device to detect a plurality of human
          facial expression states; .................................. 48
          1(c) automatically generate a dynamic emoticon that
          simulates the detected plurality of human facial expression
          states on the selected emoticon based on the sensor feed of
          the plurality of human facial expression states; and ....... 49
          1(d) route a message with the dynamic emoticon to a second
          computer device. ........................................... 50
      B.  Claim 4 .................................................... 51
          4. The computer device of claim 1, wherein the detected
          plurality of human facial expression states includes a
          smile, a laugh, a grimace, a frown, a pout, or any
          combination thereof. ....................................... 51
      C.  Claim 5 .................................................... 51
          5. The computer device of claim 1, wherein the detected
          plurality of human facial expression states includes an
          expression corresponding to a mood or an emotion, a micro
          expression, a stimulated expression, a neutralized
          expression, a masked expression, or any combination
          thereof. ................................................... 51
      D.  Claim 6 .................................................... 52
          6. The computer device of claim 1, where the at least one
          processor is further programmed to analyze a human facial
          expression using an expression recognition process to
          detect the human facial expression state. .................. 52
      E.  Claim 8 .................................................... 53
          8. The computer device of claim 1, wherein a human facial
          expression is captured in real-time from the one or more
          sensors. ................................................... 53
      F.  Claim 11 ................................................... 53
          11(Pre) A computer-implemented method of operating a
          messaging application, the method comprising: .............. 53
          11(a) receiving a selection of an emoticon; ................ 54
          11(b) monitoring a sensor feed provided by one or more
          sensors of a computer device to detect a plurality of human
          facial expression states; .................................. 54
          11(c) automatically generating a dynamic emoticon that
          simulates the detected plurality of human facial expression
          states on the selected emoticon based on the sensor feed of
          the plurality of human facial expression states; and ....... 54
          11(d) route a message with the dynamic emoticon to a second
          computer device. ........................................... 54
      G.  Claim 14 ................................................... 54
          14. The computer-implemented method of claim 11, wherein
          the detected plurality of human facial expression states
          includes a smile, a laugh, a grimace, a frown, a pout, or
          any combination thereof. ................................... 54
      H.  Claim 15 ................................................... 54
          15. The computer-implemented method of claim 11, wherein
          the detected plurality of human facial expression states
          includes an expression corresponding to a mood or an
          emotion, a micro expression, a stimulated expression, a
          neutralized expression, a masked expression, or any
          combination thereof. ....................................... 54
      I.  Claim 16 ................................................... 55
          16. The computer-implemented method of claim 11, further
          comprising analyzing a human facial expression using an
          expression recognition process to detect the human facial
          expression state. .......................................... 55
      J.  Claim 17 ................................................... 55
          17. The computer-implemented method of claim 11, wherein a
          human facial expression is captured in real-time from the
          one or more sensors. ....................................... 55
      K.  Claim 18 ................................................... 55
          18. The computer-implemented method of claim 11, further
          comprising continuously detecting human facial expression
          states using one or more sensors of the computer device
          after detecting a first human facial expression state. ..... 55
      L.  Claim 19 ................................................... 56
          19(Pre) At least one non-transitory computer-readable
          storage media having computer-executable instructions
          embodied thereon, wherein when executed by a computer
          device having at least one processor in communication with
          at least one memory device, the computer-executable
          instructions cause the processor to: ....................... 56
          19(a) receive a selection of an emoticon; .................. 57
          19(b) monitor a sensor feed provided by one or more sensors
          of the computer device to detect a plurality of human
          facial expression state; ................................... 57
          19(c) automatically generate a dynamic emoticon that
          displays the selected emoticon changing to the plurality of
          detected human facial expression states based on the sensor
          feed of the plurality of human facial expression states;
          and ........................................................ 57
          19(d) route a message with the dynamic emoticon to a second
          computer device, ........................................... 57
          19(e) wherein the second computer device displays the
          selected emoticon changing to the detected plurality of
          human facial expression states. ............................ 57
      M.  Claim 20 ................................................... 57
          20. The at least one non-transitory computer-readable
          storage media of claim 19, wherein the detected plurality
          of human facial expression states includes a smile, a
          laugh, a grimace, a frown, a pout, or any combination
          thereof. ................................................... 57
VI.   GROUND II – CORAZZA IN VIEW OF BROWN TEACHES ALL THE
      LIMITATIONS OF CLAIMS 1-2, 4-8, 11-12, AND 14-20 ............... 58
      A.  Overview of Brown .......................................... 58
      B.  Claim 1 .................................................... 62
          1(Pre) A computer device comprising at least one processor
          in communication with at least one memory device, wherein
          the at least one processor is programmed to ................ 62
          1(a) receive a selection of an emoticon .................... 62
          1(b) monitor a sensor feed provided by one or more sensors
          of the computer device to detect a plurality of human
          facial expression states; .................................. 68
          1(c) automatically generate a dynamic emoticon that
          simulates the detected plurality of human facial expression
          states on the selected emoticon based on the sensor feed of
          the plurality of human facial expression states; and ....... 69
          1(d) route a message with the dynamic emoticon to a second
          computer device. ........................................... 69
      C.  Claim 2 .................................................... 72
          2. The computer device of claim 1, wherein a messaging
          interface is used to compose the message, and wherein the
          dynamic emoticon is embedded as part of the message. ....... 72
      D.  Claim 4 .................................................... 72
          4. The computer device of claim 1, wherein the detected
          plurality of human facial expression states includes a
          smile, a laugh, a grimace, a frown, a pout, or any
          combination thereof. ....................................... 72
      E.  Claim 5 .................................................... 72
          5. The computer device of claim 1, wherein the detected
          plurality of human facial expression states includes an
          expression corresponding to a mood or an emotion, a micro
          expression, a stimulated expression, a neutralized
          expression, a masked expression, or any combination
          thereof. ................................................... 72
      F.  Claim 6 .................................................... 72
          6. The computer device of claim 1, where the at least one
          processor is further programmed to analyze a human facial
          expression using an expression recognition process to
          detect the human facial expression state. .................. 72
      G.  Claim 7 .................................................... 73
          7. The computer device of claim 1, wherein the dynamic
          emoticon is embedded at a displayable portion of the
          message. ................................................... 73
      H.  Claim 8 .................................................... 73
          8. The computer device of claim 1, wherein a human facial
          expression is captured in real-time from the one or more
          sensors. ................................................... 73
      I.  Claim 11 ................................................... 73
          11(Pre) A computer-implemented method of operating a
          messaging application, the method comprising: .............. 73
          11(a) receiving a selection of an emoticon; ................ 73
          11(b) monitoring a sensor feed provided by one or more
          sensors of a computer device to detect a plurality of human
          facial expression states; .................................. 73
          11(c) automatically generating a dynamic emoticon that
          simulates the detected plurality of human facial expression
          states on the selected emoticon based on the sensor feed of
          the plurality of human facial expression states; and ....... 74
          11(d) route a message with the dynamic emoticon to a second
          computer device. ........................................... 74
      J.  Claim 12 ................................................... 74
          12. The computer-implemented method of claim 11, wherein a
          messaging interface is used to compose the message, and
          wherein the dynamic emoticon is embedded as part of the
          message. ................................................... 74
      K.  Claim 14 ................................................... 74
          14. The computer-implemented method of claim 11, wherein
          the detected plurality of human facial expression states
          includes a smile, a laugh, a grimace, a frown, a pout, or
          any combination thereof. ................................... 74
      L.  Claim 15 ................................................... 75
          15. The computer-implemented method of claim 11, wherein
          the detected plurality of human facial expression states
          includes an expression corresponding to a mood or an
          emotion, a micro expression, a stimulated expression, a
          neutralized expression, a masked expression, or any
          combination thereof. ....................................... 75
      M.  Claim 16 ................................................... 75
          16. The computer-implemented method of claim 11, further
          comprising analyzing a human facial expression using an
          expression recognition process to detect the human facial
          expression state. .......................................... 75
      N.  Claim 17 ................................................... 75
          17. The computer-implemented method of claim 11, wherein a
          human facial expression is captured in real-time from the
          one or more sensors. ....................................... 75
      O.  Claim 18 ................................................... 75
          18. The computer-implemented method of claim 11, further
          comprising continuously detecting human facial expression
          states using one or more sensors of the computer device
          after detecting a first human facial expression state. ..... 75
      P.  Claim 19 ................................................... 76
          19(Pre) At least one non-transitory computer-readable
          storage media having computer-executable instructions
          embodied thereon, wherein when executed by a computer
          device having at least one processor in communication with
          at least one memory device, the computer-executable
          instructions cause the processor to: ....................... 76
          19(a) receive a selection of an emoticon; .................. 76
          19(b) monitor a sensor feed provided by one or more sensors
          of the computer device to detect a plurality of human
          facial expression state; ................................... 76
          19(c) automatically generate a dynamic emoticon that
          displays the selected emoticon changing to the plurality of
          detected human facial expression states based on the sensor
          feed of the plurality of human facial expression states;
          and ........................................................ 76
          19(d) route a message with the dynamic emoticon to a second
          computer device, ........................................... 77
          19(e) wherein the second computer device displays the
          selected emoticon changing to the detected plurality of
          human facial expression states. ............................ 77
      Q.  Claim 20 ................................................... 77
          20. The at least one non-transitory computer-readable
          storage media of claim 19, wherein the detected plurality
          of human facial expression states includes a smile, a
          laugh, a grimace, a frown, a pout, or any combination
          thereof. ................................................... 77
VII.  GROUND III – CORAZZA IN VIEW OF BROWN AND GATES TEACHES ALL
      THE LIMITATIONS OF CLAIMS 3, 9-10, AND 13 ...................... 77
      A.  Overview of Gates .......................................... 78
      B.  Claim 3 .................................................... 84
          3. The computer device of claim 1, wherein the at least one
          processor is further programmed to detect a facial profile
          and to match the facial profile against a known facial
          profile utilizing a facial recognition process to
          authenticate an operating user. ............................ 84
      C.  Claim 9 .................................................... 88
          9. The computer device of claim 1, wherein the at least one
          processor is further programmed to detect a plurality of
          human facial expression states on a periodic basis based on
          a predetermined period of time. ............................ 88
      D.  Claim 10 ................................................... 93
          10. The computer device of claim 1, wherein the sensor feed
          is analyzed for an expression recognition process and a
          biometric recognition process. ............................. 93
      E.  Claim 13 ................................................... 95
          13. The computer-implemented method of claim 11, further
          comprising detecting a facial profile and matching the
          facial profile against a known facial profile utilizing a
          facial recognition process to authenticate an operating
          user. ...................................................... 95
VIII. CONCLUSION ..................................................... 95
I, Benjamin B. Bederson, hereby declare the following:

I. INTRODUCTION

1. My name is Benjamin B. Bederson, and I am over 21 years of age and otherwise competent to make this Declaration. I make this Declaration based on facts and matters within my own knowledge and on information provided to me by others, and, if called as a witness, I would competently testify to the matters set forth herein. I have been retained by counsel for the Petitioner Apple Inc. (“Apple” or “Petitioner”) to provide my independent opinions on certain issues requested by Counsel for Petitioner relating to the accompanying Petition for Inter Partes Review of U.S. Patent No. 11,042,623 (“the ’623 Patent”). I understand that the Challenged Claims are claims 1-20. My opinions are limited to those Challenged Claims.

2. My compensation in this matter is not based on the substance of my opinions or the outcome of this matter. I have no financial interest in Petitioner. I am being compensated at an hourly rate of $600 for my analysis and testimony in this case.
A. Background and Qualifications

3. I have summarized in this section my educational background, career history, and other qualifications relevant to this matter. I have also included a current version of my curriculum vitae, attached hereto as Appendix A.

4. I received a Bachelor of Science degree in Computer Science with a minor in Electrical Engineering from Rensselaer Polytechnic Institute (“RPI”) in 1986. I received a Master of Science degree and a Ph.D. in Computer Science from New York University (“NYU”) in 1989 and 1992, respectively.
5. Since 1998, I have been a Professor of Computer Science at the University of Maryland (“UMD”), where I have joint appointments at the Institute for Advanced Computer Studies and the College of Information Studies (Maryland’s “iSchool”), and I am currently Professor Emeritus. I was also Associate Provost of Learning Initiatives and Executive Director of the Teaching and Learning Transformation Center from 2014 to 2018. I am a member and previous director of the Human-Computer Interaction Lab (“HCIL”), the oldest and one of the best-known Human-Computer Interaction research groups in the country. Last year, I co-founded the J.S. Bryant School, a therapeutic high school set to open to students in 2025. I was also co-founder and Chief Scientist of Zumobi, Inc. from 2006 to 2014, a Seattle-based startup that publishes content applications and advertising platforms for smartphones. I am also co-founder and co-director of the International Children’s Digital Library (“ICDL”), a web site launched in 2002 that provides the world’s largest collection of freely available online children’s books, with an interface aimed at making it easy for children and adults to search and read children’s books online. I was also co-founder and Chief Technology Officer of Hazel Analytics from 2014 to 2023, a data analytics company whose product sends alerts in warranted circumstances. In addition, I have for more than 15 years consulted for numerous companies in the area of user interfaces, including EPAM, Hillcrest Labs, Lockheed Martin, Logitech, Microsoft, NASA Goddard Space Flight Center, the Palo Alto Research Center, and Sony.
6. For more than 30 years, I have studied, designed, and worked in the field of computer science and human-computer interaction. My experience includes 30 years of teaching and research, with research interests in human-computer interaction and the software and technology underlying today’s interactive computing systems. This includes the design and implementation of software applications on mobile devices, including smart phones and PDAs, such as my work on DateLens, LaunchTile, and StoryKit described below.
7. At UMD, my research is in the area of Human-Computer Interaction (“HCI”), a field that relates to the development and understanding of computing systems to serve users’ needs. Researchers in this field are focused on making systems that are universally usable, useful, efficient, and appealing, to support people in their wide range of activities. My approach is to balance the development of innovative technology with people’s practical needs. Example systems following this approach that I have built include PhotoMesa (software for end users to browse personal photos), DateLens (2002 software for end users to efficiently access their calendar information on mobile devices), LaunchTile (2005 “home screen” software for mobile devices to allow users to navigate apps in a zoomable environment), SpaceTree (2001 software for end users to efficiently browse very large hierarchies), ICDL (the International Children’s Digital Library, which I co-founded in 2001), and StoryKit (a 2009 iPhone app for children to create stories).
8. LaunchTile led to my creation of Zumobi in 2006, where I was responsible for investigating new software platforms and developing new user interface designs that provide efficient and engaging interfaces to permit end users to access a wide range of content on mobile platforms (including the iPhone and Android-based devices). For example, I designed and implemented software called “Ziibii,” a “river” of news for iPhone; software called “ZoomCanvas,” a zoomable user interface for several iPhone apps; and iPhone apps including “Inside Xbox” for Microsoft and Snow Report for REI. At the International Children’s Digital Library (ICDL), I have since 2002 been the technical director responsible for the design and implementation of the web site, www.childrenslibrary.org (originally at www.icdlbooks.org). In particular, I have been closely involved in designing the user interfaces as well as the software architecture for the web site since its inception in 2002. I developed a mobile version of the ICDL for iPhone in 2007 that was publicly available in Apple’s App Store. I then developed a mobile app called StoryKit that allowed children to create and tell stories using an iPhone and its touch screen. It was publicly available in Apple’s App Store starting in 2009.
9. Beginning in the mid-1990s, I have been responsible for the design and implementation of numerous other web sites in addition to the ICDL. For example, I designed and built my own professional web site when I was an Assistant Professor of Computer Science at the University of New Mexico in 1995, and I have continued to design, write the code for, and update both that site (which I moved to the University of Maryland in 1998, currently at http://www.cs.umd.edu/~bederson/) as well as numerous project web sites, such as Pad++, http://www.cs.umd.edu/hcil/pad++/. I received the Janet Fabri Memorial Award for Outstanding Doctoral Dissertation for my Ph.D. work in robotics and computer vision. I have combined my hardware and software skills throughout my career in Human-Computer Interaction research, building various interactive electrical and mechanical systems that couple with software to provide an innovative user experience.
10. One of my projects involved image processing as it related to personal photo management. For example, I wrote a 2003 paper¹ describing the use of image processing to analyze image content and automatically crop away portions of the photo, leaving the salient parts, which would be easier to see in small thumbnails.

¹ Bongwon Suh, Haibin Ling, Benjamin B. Bederson, and David W. Jacobs. 2003. Automatic thumbnail cropping and its effectiveness. In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology (UIST ’03). Association for Computing Machinery, New York, NY, USA, 95-104. https://doi.org/10.1145/964696.964707
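By way of illustration only, the short Python sketch below conveys the general idea of cropping to salient content. It is not the algorithm from the 2003 paper: it substitutes gradient magnitude for a true saliency model, and the function name auto_thumbnail_crop and the keep threshold are hypothetical choices made for this sketch.

    # Illustrative sketch only (not the 2003 paper's algorithm): crop an
    # image to its most "salient" region, using gradient magnitude as a
    # crude stand-in for a real saliency model.
    import numpy as np
    from PIL import Image

    def auto_thumbnail_crop(path: str, keep: float = 0.90) -> Image.Image:
        """Crop to the smallest box holding `keep` of the image's saliency."""
        img = Image.open(path)
        gray = np.asarray(img.convert("L"), dtype=np.float32)
        gy, gx = np.gradient(gray)
        saliency = np.hypot(gx, gy)

        # Find the saliency value above which pixels jointly account for
        # `keep` of the total saliency mass.
        flat = np.sort(saliency.ravel())[::-1]
        cutoff = flat[np.searchsorted(np.cumsum(flat), keep * flat.sum())]
        ys, xs = np.nonzero(saliency >= cutoff)

        # The bounding box of the retained pixels becomes the crop, so the
        # thumbnail shows mostly the informative parts of the photo.
        return img.crop((int(xs.min()), int(ys.min()),
                         int(xs.max()) + 1, int(ys.max()) + 1))

Under these assumptions, uniform regions such as sky or walls contribute little saliency mass and fall outside the retained box, which is why the resulting thumbnail tends to center on the photo’s subject.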
11. In 2007, I wrote a paper that continued that line of work, describing further image processing that organized photos by identifying the clothes people were wearing and grouping the photos accordingly.
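Purely as a hedged illustration of how such grouping might be organized, and not the pipeline described in the 2007 paper, the sketch below samples the region beneath an already-detected face as a clothing proxy, summarizes it with a color histogram, and clusters the histograms. The names clothing_histogram, group_by_clothing, and face_box are all hypothetical.

    # Illustrative sketch only (not the 2007 paper's method): group photos
    # by clothing color, assuming face bounding boxes are already available
    # from some face detector.
    import numpy as np
    from PIL import Image
    from sklearn.cluster import KMeans

    def clothing_histogram(path, face_box, bins=8):
        """Color histogram of the region below a face box (clothing proxy)."""
        img = np.asarray(Image.open(path).convert("RGB"))
        x, y, w, h = face_box                      # face box, in pixels
        torso = img[y + h : y + 3 * h, x : x + w]  # numpy clamps the slice
        hist, _ = np.histogramdd(torso.reshape(-1, 3),
                                 bins=(bins,) * 3, range=((0, 256),) * 3)
        total = hist.sum()
        return (hist / total).ravel() if total else hist.ravel()

    def group_by_clothing(items, k=5):
        """Cluster (path, face_box) pairs into k groups by clothing color."""
        feats = np.array([clothing_histogram(p, box) for p, box in items])
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)

A histogram-plus-clustering design like this is deliberately simple: it needs no training data, though it will confuse different people who happen to wear similarly colored clothing.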
12. My work has been published extensively in more than 160 technical publications, and I have given about 100 invited talks, including 9 keynote lectures. I have won numerous awards, including the Brian Shackel Award for “outstanding contribution with international impact in the field of HCI” in 2007 and the Social Impact Award in 2010 from the Association for Computing Machinery’s (“ACM”) Special Interest Group on Computer Human Interaction (“SIGCHI”). ACM is the primary international professional community of computer scientists, and SIGCHI is the primary international professional HCI community. I have been honored by both professional organizations. I am an “ACM Distinguished Scientist,” a designation that “recognizes those ACM members with at least 15 years of professional experience and 5 years of continuous Professional Membership who have achieved significant accomplishments or have made a significant impact on the computing field.” I am a member of the “CHI Academy,” which is described as follows: “The CHI Academy is an honorary group of individuals who have made substantial contributions to the field of human-computer interaction. These are the principal leaders of the field, whose efforts have shaped the disciplines and/or industry and led the research and/or innovation in human-computer interaction.” The criteria for election to the CHI Academy are: (1) cumulative contributions to the field; (2) impact on the field through development of new research directions and/or innovations; and (3) influence on the work of others.
13. I have appeared on radio shows numerous times to discuss issues relating to user interface design and people’s use of, and frustration with, common technologies, web sites, and mobile devices. My work has been discussed, and I have been quoted, by mainstream media around the world over 120 times, including by the New York Times, the Wall Street Journal, the Washington Post, Newsweek, the Seattle Post-Intelligencer, the Independent, Le Monde, NPR’s All Things Considered, New Scientist Magazine, and MIT’s Technology Review.
14. I have designed, programmed, and publicly deployed dozens of user-facing software products that have cumulatively been used by millions of users. My work is cited in patents by several major companies, including Amazon, Apple, Facebook, Google, and Microsoft.

15. I am the co-inventor of 14 U.S. patents and 20 U.S. patent applications, which are generally directed to user interfaces/experience.

16. Based on my experiences described above, and as indicated in my Curriculum Vitae, I am qualified to provide the following opinions with respect to the patents in this case. Additionally, I was at least a person having ordinary skill in the art as of the priority date of the ’623 Patent, as described herein.

17. In writing this declaration, I have considered my own knowledge and experience, including my work experience in the fields of computer science and human-computer interaction; my experience in teaching those subjects; my experience in developing companies and products in those subjects; my experience in working with others involved in those fields; and my experience in designing and implementing software systems and user interfaces. In reaching my opinions in this matter, I have focused on the following references and materials:

Exhibit        Description
Exhibit 1001   U.S. Patent No. 11,042,623 (“’623 Patent”)
Exhibit 1002   File History of U.S. Patent No. 11,042,623 (“’623 File History”)
Exhibit 1004   U.S. Patent Publication No. 2013/0235045 to Corazza et al. (“Corazza”)
Exhibit 1005   U.S. Patent No. 8,620,850 to Brown et al. (“Brown”)
Exhibit 1006   U.S. Patent No. 9,256,748 to Gates et al. (“Gates”)
Exhibit 1007   FaceToFace Complaint
Exhibit 1008   Carnegie Mellon’s website memorializing Scott Fahlman’s original post, http://www.cs.cmu.edu/~sef/Orig-Smiley.htm
Exhibit 1009   U.S. Patent Publication No. 2006/0015812 to Cunningham et al. (“Cunningham”)
Exhibit 1010   U.S. Patent Publication No. 2004/0018858 to Nelson (“Nelson”)
Exhibit 1011   U.S. Patent Application No. 2010/0177116 to Dahllof et al. (“Dahllof”)
Exhibit 1012   U.S. Patent Application No. 2005/0195927 (“Solonen”)
Exhibit 1013   Eudora Email 6.2, pages 47-50
Exhibit 1016   U.S. Patent No. 9,425,974 to Tucker et al. (“Tucker”)
Exhibit 1017   Yu Zhong, Pierre J. Garrigues, and Jeffrey P. Bigham. 2013. Real Time Object Scanning Using a Mobile Phone and Cloud-based Visual Search Engine. Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’13)
