`
`IW 8079120
`
TO ALL TO WHOM THESE PRESENTS SHALL COME:
`UNITED STATES DEPARTMENT OF COMMERCE
`United States Patent and Trademark Office
`
`March 02, 2021
`
`THIS IS TO CERTIFY THAT ANNEXED IS A TRUE COPY FROM THE
`RECORDS OF THIS OFFICE OF THE FILE WRAPPER AND CONTENTS
`OF:
`
`APPLICATION NUMBER: 13/069,124
`FILING DATE: March 22, 2011
PATENT NUMBER: 8,463,030
`ISSUE DATE: June 11, 2013
`
`By Authority of the
`Under Secretary of Commerce for Intellectual Property
`and Director of the United States Patent and Trademark Office
`
`
`M. TARVER
`Certifying Officer
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 1 of 1115
`
`
`
`IN THE UNITED STATES PATENT AND TRADEMARK OFFICE
`P.O. BOX 1450
`ALEXANDRIA, VA 22313-1450
`
Application No.: Divisional of US 13/037317
Filed:
Applicant: Evryx Technologies, Inc.
Title: Image Capture and Identification System and Process
Docket No.: 101044.0001US14
Customer No.: 24392
`
`Mail Stop Amendment
`Commissioner for Patents
`P.O. Box 1450
`Alexandria, VA 22313-1450
`
`Sir:
`
`PRELIMINARY AMENDMENT
`
`Concurrent with filing of a divisional application for pending US Application Serial No.
`
`13/037317, please amend the above-identified application as follows:
`
Amendments to the Claims are reflected in the listing of claims beginning on page 2 of this paper.
`
Amendments to the Specification are reflected in the amendments which begin on page 4 of this paper.
`
Amendments to the Drawings: -/-
`
Remarks/Arguments begin on page 6 of this paper.
`
Appendix: -/-
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 2 of 1115
`
`
`
`
AMENDMENTS TO THE CLAIMS
`
This listing of claims replaces all prior versions, and listings, of claims in the application:
`
`1-44. (Cancelled)
`
45. (New) A transaction system comprising:
`
`a mobile device configured to acquire data related to an object;
`
`an object identification platform configured to obtain the acquired data, recognize the
`
`object as a target object based on the acquired data, and determine object
`
`information associated with the target object; and
`
`a content platform configured to obtain the object information, and initiate a transaction
`
`associated with the target object with a selected account over a network based on
`
`the object information.
`
`45. (New) The system of claim 1, wherein the mobile device is configured to operate, at least in
`
`part, as the object identification platform.
`
`46. (New) The system of claim 2, wherein the object identification platform is distributed
`
`between the mobile device and at least one remote server coupled with the mobile device via a
`
`network.
`
`47. (New) The system of claim 1, wherein a remote server coupled with the mobile device over a
`
`network is configured to operate as the object identification platform.
`
`48. (New) The system of claim 1, wherein the mobile device comprises the content platform.
`
`49. (New) The system of claim 1, wherein at least one remote server coupled with the mobile
`
`device over a network operates as the content platform.
`
`50. (New) The system of claim 1, wherein the content platform is further configured to provide
`
`content information pertinent to the target object to the mobile device based on the object
`
`information.
`
`51. (New) The system of claim 7, wherein the content information comprises video.
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 3 of 1115
`
`
`
`
`
`
`
`
`52. (New) The system of claim 8, wherein the content information comprises a video stream.
`
53. (New) The system of claim 7, wherein the content information comprises audio.
`
`54. (New) The system of claim 8, wherein the audio comprises an audio recording.
`
`55. (New) The system of claim 8, wherein the audio comprises an audio stream.
`
`56. (New) The system of claim 1, wherein the transaction comprises a commercial transaction.
`
`57. (New) The system of claim 13, wherein the commercial transaction includes a purchase
`
`related to the target object.
`
`58. (New) The system of claim 14, wherein the purchase relates to at least one of the following:
`
`audio data, video data, the object, the target object, a ticket, an item on a screen, a disc, a fare,
`
`and a vending machine product.
`
`59. (New) The system of claim 1, wherein the selected account comprises an on-line account.
`
`60. (New) The system of claim 1, wherein the selected account comprises an account linked with
`
`the mobile device.
`
61. (New) The system of claim 1, wherein the selected account comprises an account linked to a
`
`user of the mobile device.
`
`
`
`
`
`62. (New) The system of claim 1, wherein the selected account comprises a bank account.
`
`63. (New) The system of claim 1, wherein the selected account comprises a credit card account.
`
`64. (New) The system of claim 1, wherein the acquired data comprises an image.
`
`65. (New) The system of claim 21, wherein the acquired data comprises image data.
`
`66. (New) The system of claim 1, wherein the acquired data comprises a digital representation
`
`relating to a person.
`
67. (New) The system of claim 23, wherein the digital representation comprises a human face.
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 4 of 1115
`
`
`
`
68. (New) The system of claim 1, wherein the acquired data comprises user identity.
`
`69. (New) The system of claim 1, wherein the acquired data comprises location of the mobile
`
`device.
`
`70. (New) The system of claim 1, wherein the acquired data comprises screen content.
`
`
`
`
`
`71. (New) The system of claim 1, wherein the acquired data comprises a user voice command.
`
72. (New) The system of claim 1, wherein the acquired data comprises symbol content.
`
`73. (New) The system of claim 29, wherein the symbol content comprises alphanumeric data.
`
74. (New) The system of claim 1, wherein the object information comprises an object identity.
`
75. (New) The system of claim 31, wherein the object identity comprises an object classification.
`
`76. (New) The system of claim 1, wherein the object information comprises an object status.
`
77. (New) The system of claim 1, wherein the object information comprises decoded symbol
`
`information.
`
`78. (New) The system of claim 1, wherein the object information comprises an object attribute.
`
79. (New) The system of claim 1, wherein the mobile device comprises a mobile telephone.
`
80. (New) The system of claim 36, wherein the mobile device comprises a camera equipped
`
`mobile telephone.
`
`81. (New) The system of claim 1, wherein the mobile device comprises a vehicle.
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 5 of 1115
`
`
`
`
AMENDMENTS TO THE SPECIFICATION
`
`Priority Claim
`
`Please insert the following priority claim on line 2 of page 1 of the application as follows:
`
This application is a divisional of 13/037317 filed February 28, 2011 which is a divisional of 12/333630 filed December 12, 2008 which is a divisional of 10/492243 filed April 9, 2004 which is a National Phase of PCT/US02/35407 filed November 5, 2002 which is an International Patent application of 09/992942 filed November 5, 2001 which claims priority to provisional application number 60/317521 filed Sept. 5, 2001 and provisional application number 60/246295 filed Nov. 6, 2000. These and all other referenced patents and applications are incorporated herein by reference in their entirety. Where a definition or use of a term in a reference that is incorporated by reference is inconsistent or contrary to the definition of that term provided herein, the definition of that term provided herein is deemed to be controlling.
`
`Attorney Docket Number
`
The attorney docket number for this matter is 101044.0001US14.
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 6 of 1115
`
`
`
`
REMARKS/ARGUMENTS

General Remarks
`
Claims 1-44 of the copending parent application were canceled and new claims 45-81 were added. The specification was amended to make reference to the priority application, and to further comply with rules and regulations for applications with only a single figure. No new matter was entered by virtue of the amendments.

The applicant believes that all claims are in condition for allowance and respectfully requests that a timely Notice of Allowance be issued in this case.
`
`Respectfully submitted,
`
`FISH & ASSOCIATES, PC
`
`By /Nicholas J. Witchey/
`Nicholas J. Witchey.
`Reg. No. 63481
`Tel.: (949) 943-8300
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 7 of 1115
`
`
`
`
US 20090141986A1

(19) United States
(12) Patent Application Publication
Boncyk et al.
(10) Pub. No.: US 2009/0141986 A1
(43) Pub. Date: Jun. 4, 2009
`
(54) IMAGE CAPTURE AND IDENTIFICATION SYSTEM AND PROCESS

(76) Inventors: Wayne C. Boncyk, Evergreen, CO (US); Ronald H. Cohen, Pasadena, CA (US)

Correspondence Address:
FISH & ASSOCIATES, PC
ROBERT D. FISH
2603 Main Street, Suite 1000
Irvine, CA 92614-6232 (US)

(21) Appl. No.: 12/333,630

(22) Filed: Dec. 12, 2008

Related U.S. Application Data

(62) Division of application No. 10/493,343, filed on Apr. 22, 2004, now Pat. No. 7,162,886.

Publication Classification

(51) Int. Cl.
     G06K 9/62    (2006.01)
(52) U.S. Cl. ..................................................... 382/209

(57) ABSTRACT

A digital image of the object (16) is captured and the object is recognized from a plurality of objects in a database (20). An information address corresponding to the object is then used to access information and initiate communication pertinent to the object.
`
[Front-page figure: top-level flowchart showing Input Image Capture, Object Image and Symbolic Image branches, Input Image Decomposition, Database Matching, Select Best Match, and URL Return]
`
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 8 of 1115
`
`
`
`
`Patent Application Publication
`
`Jun. 4, 2009 Sheet 1 of 7
`
US 2009/0141986 A1
`
`
`
[FIG. 1: top-level algorithm flowchart showing Input Image Capture (10), Input Image Decomposition (34), the Symbolic Image branch with Decode Symbol (28), Database Matching (36), Select Best Match (38), URL Lookup (40), and URL Return (42)]
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 9 of 1115
`
`
`
`
`Patent Application Publication
`
`Jun. 4, 2009 Sheet 2 of 7
`
US 2009/0141986 A1
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 10 of 1115
`
`
`
`
`Patent Application Publication
`
`Jun. 4, 2009 Sheet 3 of 7
`
US 2009/0141986 A1
`
[FIG. 3A: flowchart of nested search loops: for each input image segment group, for each object in the database, for each view of this object, and for each segment group in this view, perform greyscale, colorcube, wavelet, and shape comparisons]
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 11 of 1115
`
`
`
`
`Patent Application Publication
`
`Jun. 4, 2009 Sheet 4 of 7
`
US 2009/0141986 A1
`
[FIG. 3B: flowchart continuation: calculate combined match score, then advance to the next segment group in this database view, the next view of this database object, the next object in the database, and the next input image segment group; finish]
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 12 of 1115
`
`
`
`
`Patent Application Publication
`
`Jun. 4, 2009 Sheet 5 of 7
`
US 2009/0141986 A1
`
[FIG. 4: system block diagram (100): a terminal (102) with camera, image processing, and browser captures image data of a target object and exchanges target image data and target object information with an identification server containing object recognition and a database, and with a content server]
`
`
`
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 13 of 1115
`
`
`
`
`Patent Application Publication
`
`Jun. 4, 2009 Sheet 6 of 7
`
US 2009/0141986 A1
`
[FIG. 5: block diagram (200) similar to FIG. 4 for cellular telephone and PDA applications: a terminal (202) with camera, image processing, and browser exchanges image data and target object information with an identification server containing object recognition and a database, and with a content server]
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 14 of 1115
`
`
`
`
`Patent Application Publication
`
`Jun. 4, 2009 Sheet 7 of 7
`
US 2009/0141986 A1
`
[FIG. 6: block diagram for spacecraft applications: a terminal with camera and image processing exchanges image data and target object information with an identification server containing a recognition engine and database, and with a spacecraft data system]
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 15 of 1115
`
`
`
`
US 2009/0141986 A1
`
`Jun. 4, 2009
`
`IMAGE CAPTURE AND IDENTIFICATION
`SYSTEM AND PROCESS
`
`TECHNICAL FIELD
`
[0001] The invention relates to an identification method and process for objects from digitally captured images thereof that uses image characteristics to identify an object from a plurality of objects in a database.
`
`BACKGROUND ART
`
[0002] There is a need to provide hyperlink functionality in known objects without modification to the objects, through reliably detecting and identifying the objects based only on the appearance of the object, and then locating and supplying information pertinent to the object or initiating communications pertinent to the object by supplying an information address, such as a Uniform Resource Locator (URL), pertinent to the object.
[0003] There is a need to determine the position and orientation of known objects based only on imagery of the objects.
[0004] The detection, identification, determination of position and orientation, and subsequent information provision and communication must occur without modification or disfigurement of the object, without the need for any marks, symbols, codes, barcodes, or characters on the object, without the need to touch or disturb the object, without the need for special lighting other than that required for normal human vision, without the need for any communication device (radio frequency, infrared, etc.) to be attached to or nearby the object, and without human assistance in the identification process. The objects to be detected and identified may be 3-dimensional objects, 2-dimensional images (e.g. on paper), or 2-dimensional images of 3-dimensional objects, or human beings.
[0005] There is a need to provide such identification and hyperlink services to persons using mobile computing devices, such as Personal Digital Assistants (PDAs) and cellular telephones.
`[0006] There is a need to provide such identification and
`hyperlink services to machines, such as factory robots and
`spacecraft.
[0007] Examples include: identifying pictures or other art in a museum, where it is desired to provide additional information about such art objects to museum visitors via mobile wireless devices;
`[0008]
`provision of content (information, text, graphics,
`music, video, etc.), communications, and transaction mecha-
`nisms between companies and individuals, via networks
`(wireless or otherwise) initiated by the individuals “pointing
`and clicking” with camera-equipped mobile devices on
`magazine advertisements, posters, billboards, consumer
`products, music or video disks or tapes, buildings, vehicles,
`etc.;
[0009] establishment of a communications link with a machine, such as a vending machine or information kiosk, by “pointing and clicking” on the machine with a camera-equipped mobile wireless device and then execution of communications or transactions between the mobile wireless device and the machine;
[0010] identification of objects or parts in a factory, such as on an assembly line, by capturing an image of the objects or parts, and then providing information pertinent to the identified objects or parts;
`
[0011] identification of a part of a machine, such as an aircraft part, by a technician “pointing and clicking” on the part with a camera-equipped mobile wireless device, and then supplying pertinent content to the technician, such as maintenance instructions or history for the identified part;
[0012] identification or screening of individual(s) by a security officer “pointing and clicking” a camera-equipped mobile wireless device at the individual(s) and then receiving identification information pertinent to the individuals after the individuals have been identified by face recognition software;
`
[0013] identification, screening, or validation of documents, such as passports, by a security officer “pointing and clicking” a camera-equipped device at the document and receiving a response from a remote computer;
[0014] determination of the position and orientation of an object in space by a spacecraft nearby the object, based on imagery of the object, so that the spacecraft can maneuver relative to the object or execute a rendezvous with the object;
[0015] identification of objects from aircraft or spacecraft by capturing imagery of the objects and then identifying the objects via image recognition performed on a local or remote computer;
[0016] watching movie previews streamed to a camera-equipped wireless device by “pointing and clicking” with such a device on a movie theatre sign or poster, or on a digital video disc box or videotape box;
`[0017]
`listening to audio recording samples streamed to a
`camera-equipped wireless device by “pointing and clicking”
`with such a device on a compact disk (CD) box, videotape
`box, or print media advertisement;
`[0018]
`purchasing movie, concert, or sporting event tickets
`by “pointing and clicking” on a theater, advertisement, or
`other object with a camera-equipped wireless device;
`[0019]
`purchasing an item by “pointing and clicking” on
`the object with a camera-equipped wireless device and thus
`initiating a transaction;
[0020] interacting with television programming by “pointing and clicking” at the television screen with a camera-equipped device, thus capturing an image of the screen content and having that image sent to a remote computer and identified, thus initiating interaction based on the screen content received (an example is purchasing an item on the television screen by “pointing and clicking” at the screen when the item is on the screen);
[0021] interacting with a computer-system-based game and with other players of the game by “pointing and clicking” on objects in the physical environment that are considered to be part of the game;
[0022] paying a bus fare by “pointing and clicking” with a mobile wireless camera-equipped device on a fare machine in a bus, and thus establishing a communications link between the device and the fare machine and enabling the fare payment transaction;
[0023] establishment of a communication between a mobile wireless camera-equipped device and a computer with an Internet connection by “pointing and clicking” with the device on the computer and thus providing to the mobile device an Internet address at which it can communicate with the computer, thus establishing communications with the computer despite the absence of a local network or any direct communication between the device and the computer;
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 16 of 1115
`
`
`
`
`
[0024] use of a mobile wireless camera-equipped device as a point-of-sale terminal by, for example, “pointing and clicking” on an item to be purchased, thus identifying the item and initiating a transaction;
`
`DISCLOSURE OF INVENTION
`
[0025] The present invention solves the above stated needs. Once an image is captured digitally, a search of the image determines whether symbolic content is included in the image. If so, the symbol is decoded and communication is opened with the proper database, usually using the Internet, wherein the best match for the symbol is returned. In some instances, a symbol may be detected, but unambiguous identification is not possible. In that case, and when a symbolic image cannot be detected, the image is decomposed through identification algorithms where unique characteristics of the image are determined. These characteristics are then used to provide the best match or matches in the database, the “best” determination being assisted by the partial symbolic information, if that is available.
[0026] Therefore the present invention provides technology and processes that can accommodate linking objects and images to information via a network such as the Internet, which requires no modification to the linked object. Traditional methods for linking objects to digital information, including applying a barcode, radio or optical transceiver or transmitter, or some other means of identification to the object, or modifying the image or object so as to encode detectable information in it, are not required because the image or object can be identified solely by its visual appearance. The users or devices may even interact with objects by “linking” to them. For example, a user may link to a vending machine by “pointing and clicking” on it. His device would be connected over the Internet to the company that owns the vending machine. The company would in turn establish a connection to the vending machine, and thus the user would have a communication channel established with the vending machine and could interact with it.
[0027] The decomposition algorithms of the present invention allow fast and reliable detection and recognition of images and/or objects based on their visual appearance in an image, no matter whether shadows, reflections, partial obscuration, and variations in viewing geometry are present. As stated above, the present invention also can detect, decode, and identify images and objects based on traditional symbols which may appear on the object, such as alphanumeric characters, barcodes, or 2-dimensional matrix codes.
[0028] When a particular object is identified, the position and orientation of an object with respect to the user at the time the image was captured can be determined based on the appearance of the object in an image. This can be the location and/or identity of people scanned by multiple cameras in a security system, a passive locator system more accurate than GPS or usable in areas where GPS signals cannot be received, the location of specific vehicles without requiring a transmission from the vehicle, and many other uses.
[0029] When the present invention is incorporated into a mobile device, such as a portable telephone, the user of the device can link to images and objects in his or her environment by pointing the device at the object of interest, then “pointing and clicking” to capture an image. Thereafter, the device transmits the image to another computer (“Server”), wherein the image is analyzed and the object or image of interest is detected and recognized. Then the network address of information corresponding to that object is transmitted from the (“Server”) back to the mobile device, allowing the mobile device to access information using the network address so that only a portion of the information concerning the object need be stored in the system's database.
[0030] Some or all of the image processing, including image/object detection and/or decoding of symbols detected in the image, may be distributed arbitrarily between the mobile (Client) device and the Server. In other words, some processing may be performed in the Client device and some in the Server, without specification of which particular processing is performed in each, or all processing may be performed on one platform or the other, or the platforms may be combined so that there is only one platform. The image processing can be implemented in a parallel computing manner, thus facilitating scaling of the system with respect to database size and input traffic loading.
[0031] Therefore, it is an object of the present invention to provide a system and process for identifying digitally captured images without requiring modification to the object.
[0032] Another object is to use digital capture devices in ways never contemplated by their manufacturer.
[0033] Another object is to allow identification of objects from partial views of the object.
[0034] Another object is to provide communication means with operative devices without requiring a public connection therewith.
[0035] These and other objects and advantages of the present invention will become apparent to those skilled in the art after considering the following detailed specification, together with the accompanying drawings wherein:
`
BRIEF DESCRIPTION OF THE DRAWINGS
`
[0036] FIG. 1 is a schematic block diagram top-level algorithm flowchart;
[0037] FIG. 2 is an idealized view of image capture;
[0038] FIGS. 3A and 3B are a schematic block diagram of process details of the present invention;
[0039] FIG. 4 is a schematic block diagram of a different explanation of the invention;
[0040] FIG. 5 is a schematic block diagram similar to FIG. 4 for cellular telephone and personal data assistant (PDA) applications; and
[0041] FIG. 6 is a schematic block diagram for spacecraft applications.
`
BEST MODES FOR CARRYING OUT THE INVENTION
`
[0042] The present invention includes a novel process whereby information such as Internet content is presented to a user, based solely on a remotely acquired image of a physical object. Although coded information can be included in the remotely acquired image, it is not required since no additional information about a physical object, other than its image, needs to be encoded in the linked object. There is no need for any additional code or device, radio, optical or otherwise, to be embedded in or affixed to the object. Image-linked objects can be located and identified within user-acquired imagery solely by means of digital image processing, with the address of pertinent information being returned to the device used to acquire the image and perform the link. This process is robust against digital image noise and corruption (as can result from lossy image compression/decompression), perspective error,
`
`BANK OF AMERICA
`
`IPR2021-01080
`
`Ex. 1002, p. 17 of 1115
`
`
`
`
`
rotation, translation, scale differences, illumination variations caused by different lighting sources, and partial obscuration of the target that results from shadowing, reflection or blockage.
[0043] Many different variations on machine vision “target location and identification” exist in the current art. However, they all tend to provide optimal solutions for an arbitrarily restricted search space. At the heart of the present invention is a high-speed image matching engine that returns unambiguous matches to target objects contained in a wide variety of potential input images. This unique approach to image matching takes advantage of the fact that at least some portion of the target object will be found in the user-acquired image. The parallel image comparison processes embodied in the present search technique are, when taken together, unique to the process. Further, additional refinement of the process, with the inclusion of more and/or different decomposition-parameterization functions, utilized within the overall structure of the search loops is not restricted. The detailed process is described in the following. FIG. 1 shows the overall processing flow and steps. These steps are described in further detail in the following sections.
[0044] For image capture 10, the User 12 (FIG. 2) utilizes a computer, mobile telephone, personal digital assistant, or other similar device 14 equipped with an image sensor (such as a CCD or CMOS digital camera). The User 12 aligns the sensor of the image capture device 14 with the object 16 of interest. The linking process is then initiated by suitable means including: the User 12 pressing a button on the device 14 or sensor; by the software in the device 14 automatically recognizing that an image is to be acquired; by User voice command; or by any other appropriate means. The device 14 captures a digital image 18 of the scene at which it is pointed. This image 18 is represented as three separate 2-D matrices of pixels, corresponding to the raw RGB (Red, Green, Blue) representation of the input image. For the purposes of standardizing the analytical processes in this embodiment, if the device 14 supplies an image in other than RGB format, a transformation to RGB is accomplished. These analyses could be carried out in any standard color format, should the need arise.
`
[0045] If the server 20 is physically separate from the device 14, then user acquired images are transmitted from the device 14 to the Image Processor/Server 20 using a conventional digital network or wireless network means. If the image 18 has been compressed (e.g. via lossy JPEG DCT) in a manner that introduces compression artifacts into the reconstructed image 18, these artifacts may be partially removed by, for example, applying a conventional despeckle filter to the reconstructed image prior to additional processing.
[0046] The Image Type Determination 26 is accomplished with a discriminator algorithm which operates on the input image 18 and determines whether the input image contains recognizable symbols, such as barcodes, matrix codes, or alphanumeric characters. If such symbols are found, the image 18 is sent to the Decode Symbol 28 process. Depending on the confidence level with which the discriminator algorithm finds the symbols, the image 18 also may or alternatively contain an object of interest and may therefore also or alternatively be sent to the Object Image branch of the process flow. For example, if an input image 18 contains both a barcode and an object, depending on the clarity with which the barcode is detected, the image may be analyzed by both the Object Image and Symbolic Image branches, and that branch which has the highest success in identification will be used to identify and link from the object.
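The branch selection just described can be sketched as follows. This is a minimal illustration only: the threshold values, the function name, and the branch labels are assumptions chosen for exposition and do not appear in the specification.

```python
# Hypothetical confidence thresholds; the specification gives no values.
SYMBOL_CONFIDENCE_HI = 0.9   # above this, symbols dominate the image
SYMBOL_CONFIDENCE_LO = 0.3   # above this, symbols are worth decoding

def route_image(symbol_confidence: float) -> list:
    """Route an input image to the Decode Symbol branch, the Object
    Image branch, or both, based on the discriminator's confidence
    that recognizable symbols are present."""
    branches = []
    if symbol_confidence >= SYMBOL_CONFIDENCE_LO:
        branches.append("decode_symbol")   # symbols likely present
    if symbol_confidence <= SYMBOL_CONFIDENCE_HI:
        branches.append("object_image")    # object content still plausible
    return branches
```

An ambiguous reading (a confidence near the middle of the range) sends the image down both branches, and whichever branch identifies the object more successfully is then used for the link.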
[0047] The image is analyzed to determine the location, size, and nature of the symbols in the Decode Symbol 28. The symbols are analyzed according to their type, and their content information is extracted. For example, barcodes and alphanumeric characters will result in numerical and/or text information.
`
[0048] For object images, the present invention performs a “decomposition”, in the Input Image Decomposition 34, of a high-resolution input image into several different types of quantifiable salient parameters. This allows for multiple independent convergent search processes of the database to occur in parallel, which greatly improves image match speed and match robustness in the Database Matching 36. The Best Match 38 from either the Decode Symbol 28, or the image Database Matching 36, or both, is then determined. If a specific URL (or other online address) is associated with the image, then an URL Lookup 40 is performed and the Internet address is returned by the URL Return 42.
`[0049] The overall flow of the Input Image Decomposition
`process is as follows:
`
`
`
Radiometric Correction
Segmentation
Segment Group Generation
FOR each segment group
    Bounding Box Generation
    Geometric Normalization
    Wavelet Decomposition
    Color Cube Decomposition
    Shape Decomposition
    Low-Resolution Grayscale Image Generation
FOR END
`
[0050] Each of the above steps is explained in further detail below. For Radiometric Correction, the input image typically is transformed to an 8-bit per color plane, RGB representation. The RGB image is radiometrically normalized in all three channels. This normalization is accomplished by linear gain and offset transformations that result in the pixel values within each color channel spanning a full 8-bit dynamic range (256 possible discrete values). An 8-bit dynamic range is adequate but, of course, as optical capture devices produce higher resolution images and computers get faster and memory gets cheaper, higher bit dynamic ranges, such as 16-bit, 32-bit or more may be used.
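The linear gain-and-offset normalization described in this step can be sketched as follows. This is a minimal illustration assuming the RGB image is held in a NumPy array; the function name and structure are not from the specification.

```python
import numpy as np

def radiometric_correction(rgb: np.ndarray) -> np.ndarray:
    """Stretch each color channel to span the full 8-bit range [0, 255]
    via a linear gain-and-offset transformation."""
    out = np.empty_like(rgb, dtype=np.uint8)
    for c in range(3):  # R, G, B planes processed independently
        channel = rgb[..., c].astype(np.float64)
        lo, hi = channel.min(), channel.max()
        if hi == lo:  # flat channel: nothing to stretch
            out[..., c] = channel.astype(np.uint8)
            continue
        gain = 255.0 / (hi - lo)  # linear gain; -lo is the offset
        out[..., c] = ((channel - lo) * gain).round().astype(np.uint8)
    return out
```

Each channel is stretched independently, so a narrow dynamic range in one channel does not limit the normalization of the others.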
[0051] For Segmentation, the radiometrically normalized RGB image is analyzed for “segments,” or regions of similar color, i.e. near equal pixel values for red, green, and blue. These segments are defined by their boundaries, which consist of sets of (x, y) point pairs. A map of segment boundaries is produced, which is maintained separately from the RGB input image and is formatted as an x, y binary image map of the same aspect ratio as the RGB image.
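The segmentation step can be sketched as a flood fill that groups 4-connected pixels of near-equal color, followed by marking segment boundaries in a separate binary map. The tolerance value and all names here are illustrative assumptions; the specification does not prescribe a particular grouping algorithm.

```python
import numpy as np
from collections import deque

def segment_boundaries(rgb: np.ndarray, tol: int = 10) -> np.ndarray:
    """Group pixels into segments of near-equal color by flood fill,
    then return a binary map marking segment boundary pixels.
    `tol` is an assumed per-channel similarity threshold."""
    h, w, _ = rgb.shape
    labels = -np.ones((h, w), dtype=np.int32)
    next_label = 0
    img = rgb.astype(np.int32)
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            # Flood-fill a new segment from this seed pixel.
            labels[sy, sx] = next_label
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1 \
                            and np.all(np.abs(img[ny, nx] - img[y, x]) <= tol):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    # Boundary map: mark pixels whose right or lower neighbor differs.
    boundary = np.zeros((h, w), dtype=np.uint8)
    boundary[:, :-1] |= (labels[:, :-1] != labels[:, 1:]).astype(np.uint8)
    boundary[:-1, :] |= (labels[:-1, :] != labels[1:, :]).astype(np.uint8)
    return boundary
```

The returned map has the same height and width as the input image, consistent with the separate boundary map of matching aspect ratio described above.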
[0052] For Segment Group Generation, the segments are grouped into all possible combinations. These groups are known as “segment groups” and represent all possible potential images or objects of interest in the input image. The segment groups are sorted based on the order in which they will be