`(12) Patent Application Publication (10) Pub. No.: US 2013/0124961 A1
(43) Pub. Date: May 16, 2013
`Linburn
`
`
`(54) USER INTERFACES AND METHODS TO
`CREATE ELECTRONIC DOCUMENTS WITH
`FORMS IMPLEMENTING CONTENT INPUT
`FIELDS
`
`(75) Inventor: Carol E. Linburn, San Francisco, CA
`(US)
`(73) Assignee: Adobe Systems Incorporated, San Jose,
`CA (US)
`
`(21) Appl. No.: 11/731,687
`
(22) Filed: Mar. 30, 2007
`
Publication Classification

(51) Int. Cl.
G06F 17/24 (2006.01)
(52) U.S. Cl.
CPC .................................... G06F 17/243 (2013.01)
USPC ......................................................... 715/224
(57) ABSTRACT

Embodiments of the invention relate generally to computing devices and systems, software, computer programs, applications, and user interfaces, and more particularly, to implementing content input fields in forms to create electronic documents, among other things.
`
`300
`N 302
`File View Control Help
`
`Interface
`
`304
`
`Electronic Form
`
`305
`
`306a
`
`306C
`
`306d
`
`310
`
`
`
`Text input field 1
`(e.g., Driver Name)
`
`Text input field 2
`(e.g., Address)
`
`Text input field 3
`(e.g., license no.)
`
`Text input field 4
`
`Image input Field
`(e.g., Description of damage)
`
`Content input Field
`
`Netflix v. GoTV
`IPR2023-00758
`Netflix Ex. 1018
`
`
`
Patent Application Publication  May 16, 2013  Sheet 1 of 10  US 2013/0124961 A1

[FIG. 1A: diagram of an interface implementing one or more content input fields to create electronic documents]
`
`
Patent Application Publication  May 16, 2013  Sheet 2 of 10  US 2013/0124961 A1

[FIG. 1B: diagram of an interface implementing a content input field to create electronic documents with a hand-held device]
`
`
`
Patent Application Publication  May 16, 2013  Sheet 3 of 10  US 2013/0124961 A1

[FIG. 2: diagram of a controller generating multiple layers, including an association to a characteristic of the content, to create an interface implementing a content input field]
`
`
`
Patent Application Publication  May 16, 2013  Sheet 4 of 10  US 2013/0124961 A1

[FIG. 3: interface 300 with menu bar 302 ("File View Control Help"), scroll bar 305, and an electronic form containing text input fields 306a–306d (e.g., Driver Name, Address, license no., time of accident), an image input field 310 (e.g., Description of damage), and a content input field]
`
`
`
Patent Application Publication  May 16, 2013  Sheet 5 of 10  US 2013/0124961 A1

[FIG. 4: interface 400 with an electronic form containing text input field 1 (e.g., Driver Name); image input field 1 (e.g., license plate no., depicting a "MICHIGAN ... GREAT LAKES" plate reading "SAMPLE"); and image input fields 2–5 (e.g., descriptions of the front and rear, driver-side and passenger-side fenders)]
`
`
`
Patent Application Publication  May 16, 2013  Sheet 6 of 10  US 2013/0124961 A1

[FIG. 5: flow 500 — 504: generate a form including a content data input field on an interface; 506: activate an image capture unit; 508: display in real-time an image within the content data input field; 510: capture at least a portion of a subject in the image; a decision (512) whether to finalize as an electronic document or change the image data directly within the form; 516: integrate the image into the form; 518: create electronic document]
`
`
Patent Application Publication  May 16, 2013  Sheet 7 of 10  US 2013/0124961 A1

[FIG. 6: flow 600 — 602: displaying an image on an interface within an image data input field of a form; 608: modify image while displaying form; 616: edit image while displaying form; 620: validate image; 622: integrate the image into the form to create an electronic document; 624: end]
`
`
`
Patent Application Publication  May 16, 2013  Sheet 8 of 10  US 2013/0124961 A1

[FIG. 7A: panel presentation application 702 including a logic module 712, panel generator 714, display module, rendering engine 708, form renderer 790, editor 792, validator 794, and content renderer 796, coupled to/from an operating system or display]

[FIG. 7B: panel presentation application including a panel generator, logic module, display module, and rendering engine (reference numerals 722–730), coupled to/from an operating system or display]
`
`
`
Patent Application Publication  May 16, 2013  Sheet 9 of 10  US 2013/0124961 A1

[FIG. 8: example of a computer system suitable for implementing content input fields in electronic forms, including storage (e.g., 802) and related components]
`
Patent Application Publication  May 16, 2013  Sheet 10 of 10  US 2013/0124961 A1

[FIG. 9: example of a panel presentation system for implementing content input fields in electronic forms]
`
`
`USER INTERFACES AND METHODS TO
`CREATE ELECTRONIC DOCUMENTS WITH
`FORMS IMPLEMENTING CONTENT INPUT
`FIELDS
`
FIELD OF THE INVENTION
[0001] Embodiments of the invention relate generally to computing devices and systems, software, computer programs, applications, and user interfaces, and more particularly, to implementing content input fields in forms to create, for example, electronic documents.
`
BACKGROUND OF THE INVENTION
[0002] Improved communications networks and electronic display technologies have contributed to the adoption of electronic documents as a principal vehicle for exchanging and memorializing information. To create a conventional electronic document, users typically enter alpha-numeric characters, as text, into text input fields of an electronic form, such as an HTML-based or XML-based form. To create traditional electronic documents with pictures, however, users are usually required to import an image from a picture file. Browser-based applications, for example, can be used to create web-based electronic documents by prompting a user to search for a picture file in some sort of file management system (e.g., a hierarchy of folders). Once the user finds the picture file, the associated picture is imported into the web-based electronic document. While functional, these techniques for creating electronic documents have certain drawbacks.
[0003] One drawback to importing images into electronic documents is that a user is typically required to perform multiple steps, and, thus, is burdened to know the file name a priori and to expend effort searching for the picture file. Another drawback is that users typically edit pictures using a separate application in a different window, thereby necessitating a transition from the presentation of the electronic form (e.g., in one window) to a picture editor (e.g., in another window). Oftentimes, later-opened windows that include a picture editor obscure previously-opened windows that include the electronic document. While some current applications use pictures to create printed documents, such as name tags, these applications usually implement static templates that include unmodifiable graphics and text, and can require multiple windows to construct a printed document. A static template can typically be described as an electronic file having a preset, customized format and structure that is used as a starting point for a particular application, so that the unmodifiable graphics and text of the template need not be recreated each time it is used. Further, multiple windows can obscure each other, especially when presented on relatively small interfaces, such as those on mobile phones and personal digital assistants ("PDAs").
[0004] It would be desirable to provide improved techniques, systems and devices that minimize one or more of the drawbacks associated with conventional techniques for creating electronic documents.
`
BRIEF DESCRIPTION OF THE FIGURES
[0005] The invention and its various embodiments are more fully appreciated in connection with the following detailed description taken in conjunction with the accompanying drawings, in which:
[0006] FIG. 1A is a diagram of an interface implementing one or more content input fields to create electronic documents, according to at least one embodiment of the invention;
[0007] FIG. 1B is another diagram of an interface implementing a content input field to create electronic documents with a hand-held device, according to at least one other embodiment of the invention;
[0008] FIG. 2 is a diagram of a controller for generating multiple layers to create an interface implementing a content input field, according to at least one embodiment of the invention;
[0009] FIGS. 3 and 4 are examples of interfaces implementing one or more content input fields, according to various embodiments of the invention;
[0010] FIG. 5 is a flow diagram depicting one example of a method for creating an electronic document, according to one embodiment of the invention;
[0011] FIG. 6 is a flow diagram depicting another example of a method for creating an electronic document, according to another embodiment of the invention;
[0012] FIGS. 7A and 7B illustrate examples of panel presentation applications for implementing content input fields in electronic forms, according to various embodiments of the invention;
[0013] FIG. 8 illustrates an example of a computer system suitable for implementing content input fields in electronic forms for an interface, according to at least one embodiment of the invention; and
[0014] FIG. 9 illustrates an example of a panel presentation system for implementing content input fields in electronic forms.
[0015] Like reference numerals refer to corresponding parts throughout the several views of the drawings. Note that most of the reference numerals include one or two left-most digits that generally identify the figure that first introduces that reference number.
`
`DETAILED DESCRIPTION
[0016] FIG. 1A is a diagram 100 of an interface implementing one or more content input fields to create, for example, electronic documents, according to at least one embodiment of the invention. In the context of creating an electronic document 140, interface 110 is configured to implement a content input field 124 as at least part of an electronic form 120. Content input field 124 is configured to accept content for presentation coincident (or substantially coincident) with the presentation of electronic form 120 in interface 110. In one embodiment, content input field 124 operates to sample content responsive to a user input, such as an input to select content input field 124. As such, a user can preview the content in the context of electronic form 120 prior to creating electronic document 140. Content input field 124, therefore, enables a user to sample content and to optionally modify the content before finalizing electronic document 140 by, for example, integrating the modified content into electronic form 120.
[0017] In view of the foregoing, content input field 124 enables a user to readily integrate content into electronic documents, thereby enhancing the functionality of electronic document 140. Namely, content input field 124 facilitates the addition of content, such as audio, graphics, animation, video, still images, and/or interactivity, to supplement text input fields (not shown) as well as other alpha-numeric character input fields. Further, content input field 124 enables a user to preview content and modify the content in the context of electronic form 120, without requiring, for example, a transition away (or interrupting the display thereof) to another panel. In addition, content input field 124 can be configured to directly present content, such as images, thereby obviating the necessity to import content from a content file, such as a file containing image data. In at least one embodiment, a user can edit the content in relation to (e.g., directly in or in association with) content input field 124, thereby eliminating a requirement to access a separate editing application for purposes of editing the content.
[0018] To illustrate, consider the example in FIG. 1A in which content input field 124 is configured to accept image data that represents one or more images. Note that the term "image" can refer, in at least one embodiment, to a still image or a collection of images that constitute video. Content input field 124, in this example, is an image data input field for electronic form 120 and is configured to accept imagery as content. As such, content input field 124, in whole or in part, can be configured to present an image in real-time (or substantially in real-time) for an interface 110. Next, consider that electronic form 120 is configured to accept user input via user input interface 144. In one embodiment, selection of content input field 124 activates an image capture unit 102, thereby facilitating entry of image data into content input field 124. In this example, the image data represents a scene including a mountain range as a subject 108. Once selected, content input field 124 can sample images in a continual fashion until, for example, user input interface 144 initiates the capture of at least a portion 130 of subject 108. As shown, portion 130 represents a captured image within a boundary 122. Thus, content input field 124 and electronic form 120 provide for a preview of electronic document 140 prior to finalization. In one embodiment, content input field 124 and electronic form 120 can be rendered in a single panel.
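The sample-until-capture behavior described above can be sketched as a small state model: a field continually samples frames from a capture source for live preview, and "captures" (freezes) the current frame on user input. This is an illustrative sketch only; the class and method names are assumptions, not taken from the patent.

```python
# Illustrative sketch (assumed names) of a content input field that samples
# frames from an image capture unit until the user captures one.
class ContentInputField:
    def __init__(self, capture_unit):
        self.capture_unit = capture_unit  # callable returning the next frame
        self.current = None               # last sampled (previewed) frame
        self.captured = None              # frozen frame, if any

    def sample(self):
        """Pull the next frame for live preview within the field."""
        self.current = self.capture_unit()
        return self.current

    def capture(self):
        """Freeze the currently previewed frame as the field's content."""
        self.captured = self.current
        return self.captured

    def recapture(self):
        """Discard the frozen frame and resume live sampling."""
        self.captured = None
        return self.sample()
```

In use, selecting the field would drive repeated `sample()` calls for the preview, with `capture()` bound to the user input that freezes portion 130, and `recapture()` modeling the re-sampling described for portion 132.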
[0019] Further to the example shown, consider that a user wishes to modify the content data in association with content input field 124 by, for example, recapturing another portion 132 of subject 108. The modified image can be displayed simultaneously (or substantially simultaneously) with electronic form 120. To modify the content data (i.e., image data), user input interface 144 can be configured to dispense with the image data for portion 130, and to resample portions of subject 108 in real-time. In one embodiment, the modified image for a recaptured portion 132 is formed by translating one or more images relative to boundary 122, or vice versa. For instance, a user can move image capture unit 102 relative to a fixed position in space to translate the images relative to boundary 122. In one embodiment, the user can translate portions of subject 108 to include a characteristic of the subject. For example, a user can translate from the image associated with portion 130 to the image associated with portion 132 to include a tree as a characteristic. In an alternative embodiment, boundary 122 can demarcate a time interval during which a portion of audio or a sound can be captured, if the content includes audio.
[0020] In one embodiment, interface 110 is configured to accept user inputs via user input interface 144 to edit the content, thereby forming edited content. In a specific embodiment, in-situ editor 146 can be configured to edit content data associated with content input field 124. Continuing with the previous example, in-situ editor 146 can be implemented to perform image-related edits, such as cropping an image, performing an automatic color balance or brightness balance, performing gamma corrections, performing red-eye removal, and the like. Further, in-situ editor 146 can be configured to edit a captured image, as well as recaptured images, within content input field 124 to generate an edited image in the context of electronic form 120. In various embodiments, in-situ editor 146 can be configured to edit captured images in portions 130 or 132 within content input field 124 without transitioning from electronic form 120 to implement an editor application, such as a stand-alone photo or video editing application. In one embodiment, electronic form 120 can present editor inputs 190 to select an image-related edit operation in the context of the form. In the example shown, editor inputs 190 can be presented as part of a panel that includes electronic form 120. Once a captured image (or recaptured image) has been modified and/or edited, it can be finalized to form electronic document 140. For example, portion 132 can be integrated into form 120 to create electronic document 140 with image 142.
[0021] As used herein, the term "content input field" refers generally, at least in one embodiment, to a data field that accepts content in real-time (or substantially in real-time) in association with an electronic form, whereby the content associated with the content input field can be integrated with an electronic form to create an electronic document. As used herein, the term "content," at least in one embodiment, refers to information and/or material (e.g., multi-media information) presented within an interface in relation to, for example, a web site or a data entry application, such as a software product, for creating electronic documents. Content can also include the audio and/or visual presentation of text, such as an electronic document (e.g., a document in Portable Document Format ("PDF")), as well as audio, images, and audio/video media, such as Flash presentations, text, and the like. As used herein, the term "modified content" refers generally, at least in one embodiment, to content that has been either recaptured after a previous capture, or edited, or both. As used herein, the term "sampling" refers generally, at least in one embodiment, to receiving digitized representations of content in real-time, such as visual imagery generated by an image capture unit or device, for purposes of capturing content (e.g., as video or a still image). As used herein, the term "capture" refers generally, at least in one embodiment, to the storage and/or recordation of data representing images, sound, or the like. For example, a captured image can be a still image (e.g., a "freeze frame") representing a portion of a subject. As another example, captured video or audio can be a portion of video or audio for a finite duration of time.
[0022] As used herein, the term "subject" can refer to, at least in one embodiment, a person or thing photographed, as well as a sound, voice or music recorded. As used herein, the term "panel," at least in one embodiment, can refer to displays, palettes, tabs, windows, screens, portions of an interface, and the like. As used herein, the term "electronic form" can refer, at least in one embodiment, to an interactive form having data input fields, including a content input field. In one embodiment, each of the data input fields is implemented in a single panel. In other embodiments, the data input fields for the electronic form can be distributed over multiple panels. As used herein, the term "electronic document" can refer, at least in one embodiment, to any data files (e.g., other than computer programs or system files) that are intended to be used in their electronic form, without necessarily being printed, whereby computer networks and electronic display technologies can help facilitate their use and distribution. Note that an electronic document can itself be defined as content, at least in one embodiment. As such, the electronic document can enable users to experience rich media content that is created with a content input field. As used herein, the term "integrating," as it relates to finalizing electronic documents, can refer, at least in one embodiment, to generating an association between content and an electronic form, thereby combining the two for producing an electronic document. In some cases, the content is affixed as part of the electronic form, whereas in other cases, the content can be separable from the electronic document. When affixed, the content can be stored within a common data file as the electronic form, whereas separable content can be stored as a separate content file.
[0023] FIG. 1B is a diagram 150 of an interface implementing a content input field to create electronic documents with a hand-held device, according to at least one other embodiment of the invention. Hand-held device 180 is configured to include an interface 182 for displaying content (or representations thereof) within a content input field, such as content input field 174, and one or more user inputs 184. Interface 182 can be used to create an electronic document 190 that includes an image 192 captured in association with the content input field. In the example shown, interface 160 can be implemented as interface 182. Interface 160 is configured to implement content input field 174 and one or more other input fields 176 for data entry, such as text input data and the like. Further, hand-held device 180 can include a controller 199 that is configured to coordinate the functionalities of interface 160, user input interface 194, and image capture unit 196.
[0024] In at least one embodiment, hand-held device 180 is a mobile phone having relatively limited resources, such as a minimal interface 182 and user input interface 194. Minimal interface 182 is typically smaller than displays used in laptop and desktop computing applications. Controller 199 can be configured to generate content input field 174 within a single panel, thereby obviating instances where multiple panels might obscure either the electronic form or the content, or both. User input interface 194 is usually limited to directional keys (e.g., up, down, left and/or right), such as user inputs 184, for navigating minimal interface 182. Controller 199 can also be configured to implement directional keys 184 to perform one or more operations with respect to the content in the context of interface 182. For example, a user can use directional keys 184 to navigate electronic form 170 to select content input field 174, which, in turn, causes content to be presented in association with content input field 174. For instance, selection of content input field 174 can initiate streaming of digitized imagery from image capture unit 196 to content input field 174. Directional keys 184 or other inputs in user input interface 194 can be implemented to capture, recapture, accept, reject, modify and/or edit the content. In various embodiments, controller 199 can be implemented in either hardware or software, or a combination thereof.
[0025] FIG. 2 is a diagram 200 of a controller for generating multiple layers to create an interface implementing a content input field, according to at least one embodiment of the invention. In this example, controller 230 is configured to generate an image layer 208 and a form layer 204, whereby both image layer 208 and form layer 204 can constitute, in whole or in part, an electronic form. In one embodiment, form layer 204 can include a transparent (or substantially transparent) content input field 206, where the transparency of content input field 206 facilitates the presentation of at least a portion of the image within a boundary 210. Further, controller 230 is configured to translate image layer 208 to translate the image relative to transparent content input field 206 and boundary 210. For example, translating image layer 208 in the X, Y, and/or Z spatial dimensions can provide for capturing a portion of the image at one point in time, and can further provide for recapturing another portion of the image at another point in time. An example of one such portion is portion 210a, which is defined by boundary 210. Translating the image, as presented through transparent content input field 206, thereby can operate to modify the image to form a modified image. In another embodiment, controller 230 can operate to translate form layer 204 relative to image layer 208. In at least one embodiment, controller 230 can capture (e.g., store) image data constituting image layer 208, while enabling a user to subsequently translate image layer 208 to position portion 210a with respect to content input field 206.
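The layer translation described above amounts to choosing which region of the image falls inside the fixed boundary: translating the image layer changes the portion visible through the transparent field. A minimal sketch, assuming the image is a 2-D grid of pixels and the boundary is an axis-aligned window (function and parameter names are illustrative, not the patent's):

```python
# Illustrative sketch: translating an image layer by (offset_x, offset_y)
# relative to a fixed boundary selects which sub-grid of the image is
# visible through the transparent content input field.
def visible_portion(image, boundary_w, boundary_h, offset_x, offset_y):
    """image: 2-D list of pixel values (rows of columns). Returns the
    boundary_h x boundary_w sub-grid inside the boundary after translating
    the image layer by the given offsets."""
    return [row[offset_x:offset_x + boundary_w]
            for row in image[offset_y:offset_y + boundary_h]]
```

For a 4x4 image and a 2x2 boundary, offset (0, 0) captures the top-left portion; translating to (2, 2) recaptures the bottom-right portion, mirroring the capture of portion 210a followed by recapture of a different portion.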
[0026] In a specific embodiment, controller 230 can include a form renderer 234 configured to generate form layer 204, and a content renderer 236 configured to present content in association with content input field 206. In the example shown in FIG. 2, content renderer 236 can generate image layer 208 for presenting an image as content. In other embodiments, content renderer 236 can generate an audio layer 220 to associate a sound, voice and/or audio data to content input field 206. Controller 230 can be used to capture a portion 210b of a sound, voice and/or audio, responsive, for example, to a user input.
[0027] Optionally, controller 230 can include either an in-situ editor 232 or a validator 238, or both. Further, controller 230 can optionally include interpreter 239. In-situ editor 232 can be configured to edit the data representing captured content. In one embodiment, in-situ editor 232 can be configured to present an editor layer 202 as a content editor for editing the content. As such, editor layer 202 is configured to accept user input to edit the content in-situ to form edited content. For example, in-situ editor 232 facilitates the presentation of the edited content with respect to content input field 206 simultaneous to (or substantially simultaneous to) the presentation of form layer 204. In some cases, in-situ editor 232 can, for example, crop, remove red eye, modify the contrast and color of, and/or otherwise enhance an image. In other cases, in-situ editor 232 can, for example, cut, copy, delete, splice, and mix sounds together, as well as modify the speed or pitch of captured audio, or amplify, normalize, or equalize the audio.
[0028] Validator 238 can be configured to analyze the content to determine whether any captured content associated with content input field 206 matches (or likely matches) a predetermined metric. In one embodiment, content input field 206 can relate via an association 240 to a characteristic 242 of the content sought to be captured. If there is a match, then validator 238 is configured to present an indication (not shown) that content input field 206 includes characteristic 242. For example, validator 238 can operate to determine whether portion 210a includes (or likely includes) a tree 211 as a characteristic of the subject. Thus, validator 238 can validate whether image data representing the image in portion 210a meets a first threshold indicating that the content in relation with content input field 206 includes characteristic 242. The first threshold, for example, can represent an amount of green-colored pixels that are associated with image data in content input field 206.
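The green-pixel threshold mentioned above can be sketched as a simple fraction test: count pixels whose green channel dominates, and compare the fraction against the first threshold. This is a hypothetical sketch; the function name, the 0.15 default threshold, and the "green-dominant" heuristic are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the validator's first-threshold check: does the
# share of green-dominant pixels in the captured image meet a threshold
# (used here as a crude proxy for "contains a tree")?
def validate_green_fraction(pixels, threshold=0.15):
    """pixels: iterable of (r, g, b) tuples. Returns True when the
    fraction of green-dominant pixels meets the threshold."""
    total = 0
    green = 0
    for r, g, b in pixels:
        total += 1
        if g > r and g > b:  # crude "green-dominant" test (assumption)
            green += 1
    return total > 0 and green / total >= threshold
```

A validator built this way only signals a likely match; as the paragraph notes, the indication means the field's content "likely includes" the characteristic, not that it certainly does.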
[0029] Interpreter 239 can be configured to interpret images and/or sounds (or portions thereof) to determine information. In a specific embodiment, interpreter 239 is configured to test whether image data representing the image presented in content input field 206 meets a second threshold to determine whether the image data complies with one or more requirements. For example, the second threshold can represent a confidence level that the shape of tree 211 matches at least one of a number of predetermined shapes for trees. If there is a match, there can be a relatively high likelihood that tree 211 can be characterized as being a tree. In a specific embodiment, interpreter 239 can perform optical character recognition processes to translate optically-scanned bitmaps of text characters into character codes, such as ASCII. Thus, interpreter 239 can convert portions of an image in content input field 206 to determine one or more alpha-numeric characters. In another embodiment, interpreter 239 can perform speech recognition to translate audio into alpha-numeric characters. In various embodiments, interpreter 239 can interpret or convert content presented in association with content input field 206 into another form for implementation as an electronic document.
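The second-threshold test described above can be sketched as a confidence gate over recognized character candidates: each candidate carries a recognition confidence, and the field's content is converted to text only when every candidate meets the threshold. The names, the (char, confidence) representation, and the 0.9 default are illustrative assumptions, not the patent's method.

```python
# Hedged sketch of the interpreter's thresholding step: convert recognized
# character candidates to a string only if all confidences meet the
# second threshold; otherwise report no interpretation.
def interpret_characters(candidates, threshold=0.9):
    """candidates: list of (char, confidence) pairs. Returns the
    recognized string when all confidences meet the threshold,
    else None."""
    if all(conf >= threshold for _, conf in candidates):
        return "".join(char for char, _ in candidates)
    return None
```

Under this sketch, the "SAMPLE" license plate example of FIG. 4 would succeed only when each of the six letters individually meets the per-character requirement.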
[0030] FIGS. 3 and 4 are examples of interfaces implementing one or more content input fields, according to various embodiments of the invention. In FIG. 3, interface 300 can include a pull-down menu 302 (e.g., "File," "View," etc.), a scroll bar 305, and an electronic form 304. In at least one embodiment, electronic form 304 can include any number of text input fields, such as text input field ("1") 306a, text input field ("2") 306b, text input field ("3") 306c, and text input field ("4") 306d. These text input fields can be configured to accept, for example, alpha-numeric characters as input data. Electronic form 304 can also include one or more content input fields, such as an image input field 310 and a content input field 320. In an exemplary application, consider that electronic form 304 is implemented as part of a computer program for processing the condition of an automobile, such as used by a car repair shop or a car rental agency.
[0031] Consider that electronic form 304 can be used to memorialize damage to an automobile as well as related information. For example, text input field ("1") 306a, text input field ("2") 306b, text input field ("3") 306c, and text input field ("4") 306d can be configured to respectively accept data representing a driver name, an address, a license number, and a time of accident. A user of electronic form 304 can select image input field 310 to sample an image of an automobile 312 as a subject. While viewing automobile 312 in the context of image input field 310, the user can capture and recapture image data to optimally depict, for example, the extent of damage 314 associated with automobile 312. Optionally, content input field 320 can be configured to accept any content as input, such as a sound. As such, when the user selects content input field 320, a sound can be captured. For example, the user can record the driver's verbal account of the accident. In some cases, once the user is satisfied, electronic form 304 can be used to produce an electronic document. In one embodiment, producing an electronic document includes, in whole or in part, saving data for electronic form 304 in association with other data that represents the captured image of automobile 312 and/or the captured sound corresponding to content input field 320.
[0032] FIG. 4 depicts an interface 400 including an electronic form 402, according to another embodiment of the invention. In this example, electronic form 402 includes an optional text input field ("1") 404, and image input fields 406, 408, 410, 412, and 414. To illustrate the functionality of electronic form 402, consider that text input field ("1") 404 and image input fields 406, 408, 410, 412, and 414 are configured to capture information about the condition of a rental car returned by a customer. Here, text input field ("1") 404 can be configured to accept user input, such as the driver's name. Image input field ("1") 406 can be configured to capture an image of a license plate 407 and license plate number.
[0033] A controller, which is not shown, can generate electronic form 402 and image input field 406. In one embodiment, the controller can also be configured to test whether image data representing the image in image input field 406 meets a threshold indicating that the image data complies with one or more requirements. As used herein, the term "requirement" can refer to, at least in one embodiment, a threshold against which an interpreter analyzes content to interpret whether the content includes an identifiable characteristic. An example of such a requirement is that the captured license plate number in image input field 406 must match a predetermined license plate number for the returned automobile. Thus, the controller can be configured to identify the automobile. In a specific embodiment, an interpreter (not shown) can perform optical character recognition processes to determine a string of characters, such as "SAMPLE," and then determine whether SAMPLE has access to other information stored in relation to that license plate number. In this case, an interpreter can determine that each of the letters meets a requirement, or threshold, for identifying each of the letters as S, A, M, P, L, and E. A validator, in turn, can determine whether the license plate number (e.g., SAMPLE) is a characteri