`Reisman
`
US006954755B2

(10) Patent No.:     US 6,954,755 B2
(45) Date of Patent: Oct. 11, 2005
`
TASK/DOMAIN SEGMENTATION IN
APPLYING FEEDBACK TO COMMAND
CONTROL

Inventor: Richard Reisman, 70 East 9th St., New
York, NY (US) 10003
`
Notice:

Subject to any disclaimer, the term of this
patent is extended or adjusted under 35
U.S.C. 154(b) by 0 days.
`
Appl. No.: 10/410,352

Filed: Apr. 10, 2003

Prior Publication Data

US 2003/0172075 A1    Sep. 11, 2003
`
6,014,665 A     1/2000   Culliss ................. 707/5
6,029,192 A     2/2000   ......... et al. ........ 709/206
6,067,539 A     5/2000   Cohen ................... 707/2
6,151,624 A  *  11/2000  Teare et al. ............ 709/217
6,192,364 B1 *  2/2001   Baclawski ............... 707/10
6,3??,??1 B1             Davis et al. ............ 707/3
6,460,036 B1    10/2002  Herz .................... 707/10
`
FOREIGN PATENT DOCUMENTS

WO 99/19816    4/1999
WO 99/39275    8/1999
WO 99/39280    8/1999
`OTHER PUBLICATIONS
`
J.E. Kendall et al., "Information Delivery Systems: An
Exploration of Web Pull and Push Technologies," Commu-
nications of the Association for Information Systems, vol. 1,
Art. 14, Apr. 1999.
`
`Related U.S. Application Data
`Division of application No. 09/651,243, filed on Aug. 30,
`2000.
`
(Continued)
`Primary Examiner—Mohammad Ali
`
Int. Cl.7 .............................................. G06F 17/30
`
`(57)
`
`ABSTRACT
`
U.S. Cl. ............................ 707/10; 707/3; 715/513
Field of Search .................. 707/1-10, 100-104.1,
        707/200-205; 715/513; 709/217; 704/9

References Cited

U.S. PATENT DOCUMENTS
`
An apparatus for responding to a current user command
associated with one of a plurality of task/domains includes:
a digital storage device that stores cumulative feedback data
gathered from multiple users during previous operations of
the apparatus and segregated in accordance with the plural-
ity of task/domains; a first digital logic device that deter-
mines the current task/domain with which the current user
command is associated; a second digital logic device that
determines a current response to the current user command
on the basis of that portion of the stored cumulative feedback
data associated with the current task/domain; a first com-
munication interface that communicates to the user the
current response; and a second communication interface that
receives from the user current feedback data regarding the
current response. The current feedback data is added to the
cumulative feedback data stored in the digital storage device
and associated with the current task/domain.
`
`17 Claims, 6 Drawing Sheets
`
[Representative drawing: the FIG. 3 flowchart (reproduced on Sheet 4), showing a query for an unspecified task parsed, likely tasks inferred from user/task and query/task associations, lists of hits generated and presented for one or more tasks, and selections/feedback recorded through a feedback weighting algorithm.]
`
`4,974,191
`5,224,205
`5,511,208
`5,715,395
`5,748,945
`5,751,956
`5,764,906
`5,794,050
`5,835,897
`5,855,020
`5,929,852
`5,974,444
`
`6,006,222>3>>>>>>>>>>3>>
`
`...... H 364/900
`
`11/1990 Amirghodsi et al.
`5/1993 Dinkin et a1_
`4/1996 Boyleg et a],
`2/1998 Brahson et al.
`395/500
`5/1998
`.~
`~ 395/209-33
`5/1993
`' 395'/225x43
`1'
`hl
`64,1998
`705/2
`........... ..
`11'1998 D
`39 70“
`8/1998 DH gen et a ’
`. 707/10
`12,1998
`345,335
`7/1999 Fisher et al.
`709/503
`10/1999 Konrad
`12/1999 Culliss ........................ .. 707/5
`5
`
`i
`
`.
`
`user
`
`AMERICAN EXPRESS v. METASEARCH
`CBM2014-00001 EXHIBIT 2015-1
`
`
`
`US 6,954,755 B2
`Page 2
`
`OTHER PUBLICATIONS
`
`K.E. Kendall, “Artificial Intelligence and Gotterdamerung:
`The Evolutionary Paradigm of the Future,” The Data Base
`for Advances in Information Systems, vol. 27, No. 4, Fall
1996, pp. 99-115.
S. Alter et al., "A General, Yet Useful Theory of Informa-
tion Systems", Communications of the Association for Infor-
`mation Systems, vol. 1, Art. 13, Mar. 1999.
`J. Klensin et al., “Domain Names and Company Name
`Retrieval,” RFC2345, Network Working Group Request for
`Comments: 2345, May 1998.
`1.0
`T. Bray et al., “Extensible Markup Language
`1998,”
`W3C
`Recommendation
`Feb.
`10,
`REC—xml—199802l0,
`http://www.w3.org/TR/1998/’
`REC—xml—19980210, Feb. 10, 1998.
`Anonymous, “XML: Enabling Next—Generation Web Appli-
`cations,” Microsoft Corporation, http://msdn.microsoft.
com/archive/en-us/dnarxml/html/xmlwp2.asp?frame=true,
`Apr. 3, 1998.
`Anonymous, “UDDI Technical White Paper,” http://www.
`uddi.org/pubs/Iru_UDDI_Technical_White_Paper.pdf,
`Sep. 6, 2000.
`Anonymous, “electronic business XML (ebXML) Require-
`ments Specification—ebXML Candidate Draft Apr. 28,
`2000,” http://www.ebxml.org/specdrafts/RSV09.htm, Apr.
`28, 2000.
Anonymous, "BizTalk™ Framework 1.0 Independent
Document Specification," BizTalk Enabling Software to
`Speak the Language of Business, Microsoft Corporation,
`Nov. 30, 1999.
`Philip Costa, “Navigating the Sea of XML Standards,” Giga
`Information Group, Dec. 14, 1999.
"How does Inference Find work?" available at http://www.
infind.com/about.html as of Nov. 11, 1999.
Kathleen Hall, "Ask Jeeves Takes Direct Hit", available at
http://www.gigaweb.com/Content/GIB/
RIB-022000-00177.html, as of Feb. 19, 2000.
"Latest engines go vertical in search of relevant informa-
tion", Harvard Computing Group Report, available at http://
www.bettergetter.com/betterg/demo/whitepaper.jsp as of
Dec. 20, 1999.
"About the W3-Corpora Project" available at http://www.es-
sex.ac.uk/w3c/corpus_ling/about.html as of Aug. 24,
2000.

"Language Translation" available at http://www-dse.doc.ic.
ac.uk/~nd/surprise_97/journal/vol4/hks/trans.html as of
Aug. 23, 2000.
"Conclusion" available at http://www-dse.doc.ic.
ac.uk/~nd/surprise_97/journal/vol4/hks/conclu.html as of
Aug. 23, 2000.
"Language Software," available at http://www-dse.doc.ic.
ac.uk/~nd/surprise_97/journal/vol2/hks/lan_trans.html as
of Aug. 23, 2000.
"Symmetry Health Data Systems Achieves Patent: ETG and
'Dynamic Time Window' new industry standards." available
at http://www.symmetry-health.com/PR_Patent.html as of
Jun. 21, 2000.
"Resource For The Semantic Web", available at http://
www.semanticweb.org/resources.html as of May 27, 2000.
Web Design Issues, "What the Semantic Web Can Represent"
available at http://www.w3.org/DesignIssues/RDFnot.html
as of Dec. 8, 2000.
`
Bob Metcalfe, "Web Father Berners-Lee Shares Next-Gen-
eration Vision of the Semantic Web," InfoWorld vol. 21,
Issue 21, May 24, 1999.
Tim Berners-Lee, "The Meaning of a Document-Axioms of
Web Architecture," available at http://www.w3.org/Design-
Issues/Meaning.html, dated 1999, last modified Jan. 24,
2000.
Alexander Chislenko, "Semantic Web Vision Paper" Version
0.28, Jun. 29, 1997, available at http://www.lucifer.com/~
sasha/Articles/SemanticWeb.html.
Tim Berners-Lee, "Semantic Web Road Map", Sep. 1998,
last modified Oct. 14, 1998, available at http://www.w3.org/
DesignIssues/Semantic.html.
Tim Berners-Lee, "Semantic Web as a Language of Logic",
1998, last modified Apr. 14, 2000, available at http://
www.w3.org/DesignIssues/Logic.html.
"Semantic Search-The SHOE Search Engine", available at
http://www.cs.umd.edu/projects/plus/SHOE/search/, as of
May 29, 2000.
"Telcordia™ Latent Semantic Indexing Software (LSI):
Beyond Keyword Retrieval," available at http://lsi.re-
search.telcordia.com/lsi/papers/execsum.html as of Dec. 11,
2000.
Jeff Heflin et al., "Searching the Web with SHOE", Dept. of
Computer Science, University of Maryland.
"The SHOE FAQ", available at http://www.cs.umd.edu/
projects/plus/SHOE/faq.html as of May 29, 2000.
Dagobert Soergel, Review of WordNet, D-Lib Magazine,
Oct. 1998, available at http://www.dlib.org/dlib/october98/
10bookreview.html.
Harold Boley et al., "Tutorial on Knowledge Markup Tech-
niques", Aug. 22, 2000, available at http://www.seman-
ticweb.org/knowmarktutorial/ as of May 29, 2000.
Paula J. Hane, "Beyond Keyword Searching-Oingo and
Simpli.com Introduce Meaning-Based Searching," Dec. 20,
1999, available at http://www.infotoday.com/newsbreaks/
nb1220-2.htm.
Sharon Cleary, "Simpli.com Uses Linguistics to Help Web
Engines Do Better Searches", Wall Street Journal Interactive
Edition, Feb. 7, 2000.
"Simplified Technology White Paper", available at http://
www.simpli.com/search_white_paper.html.
Reed Hellman, "A Semantic Approach Adds Meaning to the
Web", Computer, Dec. 1999.
"What is 1 jump", available at http://www.1jump.com/ as of
Nov. 11, 1999.
"Alexa FAQ's", available at http://www.alexa.com/whatis-
alexa/faq.html as of Feb. 14, 1998.
"1 jump for Windows features and benefits" available at
http://www.1jump.com/featurebenefit.html as of Nov. 11,
1999.

"1 jump company and contact information", available at
http://www.1jump.com/corp.html as of Nov. 11, 1999.
"Alexa User Paths", available at http://www.alexa.com/
whatisalexa/user_paths.html as of Feb. 14, 1998.
John F. Ince, "Searching for Profits: The pioneers had to
expand to make money. Will the next wave fare any better?"
Upside, May 2000, available at http://www.upsidetoday.com.
Jim Rapoza, "Alexa's Theory of Relativity: Filtering ana-
lytical algorithms link to Web sites-relevant or not", PC
Week Labs, available at http://www.zdnet.com/pcweek/re-
views/0818/18alex.html as of Feb. 14, 1998.
"1 jump Help menu", available at http://www.alexa.com.
`
`
`
`
`US 6,954,755 B2
`Page 3
`
BAA 00-07 Proposer Information Pamphlet: Agent Based
Computing, available at http://www.darpa.mil/iso/ABC/
BAA0007PIP.htm.

GlobalBrain.net, Home Page, Background and Technology
from GlobalBrain.net Web site www.GlobalBrain.net, Jun.
1999.

"RealNames Temporarily Suspends Registration of Gener-
ics", from The Search Engine Report, Jan. 4, 2000, from
http://searchenginewatch.internet.com/sereport/00/01-real-
names.html.
`
Tim Bray, "RDF and Metadata", from http://www.xml.com/
xml/pub/98/06/rdf.html, 1998.
Ora Lassila, "Web Metadata: A Matter of Semantics", IEEE
Internet Computing, Jul.-Aug. 1998.
Elizabeth Gardner, "Hollywood Marketers Debate Idea of
URL for Every Movie", WebWeek, Jan. 19, 1998.
"Netword Receives Patent for Internet Keyword System",
Netword.com Press Release, Jun. 16, 1999.
"Direct Hit Receives Funding From Draper Fisher Jurvet-
son", DirectHit.com Press Release, May 15, 1998.
`“Direct Hit Signs Deal With Wired Digital’s Hot Bot for
`Popularity Engine”, DirectHit.com Press Release, Aug. 19,
`1998.
`
DirectHit.com, Company & Background Articles and Fre-
quently Asked Questions, from http://system.directhit.com/,
Oct. 1998.

"Technology Overview" from DirectHit Web Site, www.di-
recthit.com, printed Jun. 1999.
"Centraal Corporation Redefines Internet Navigation", Press
release from realnames.com, Mar. 12, 1998, http://company-
.realnames.com/iwrelease.asp.
"Centraal Corporation FAQ", from http://company.realna-
mes.com/FAQ.asp, Mar. 1998.
Michael Tchong, "Centraal Debuts", Mar. 11, 1998, ICONO-
CAST, from http://company.realnames.com/iconocast.asp.
"Access, Searching and Indexing of Directories (asid)", Jan.
1998, from http://www.ietf.cnri.reston.va.us/html.charters/
asid-charter.html.
`
`“GoTo.com, The First Ever Market-Driven Search Direc-
`tory”, GoTo.com Press Release, Feb. 21, 1998, from http://
`www.goto.com/release.html.
`“URL Expansion Proposal”, UseNet Thread, Jan. 1996.
`Elizabeth Gardner, “Dislike Your URL? Now You Can
`Register a ‘NetWord’”, WebWeek, Aug. 18, 1997.
`“Netword LLC Receives Notice of Allowance”, Netword.
`com press release, Dec. 9, 1997.
`“Internet Keywords Give Consumers Direct Access to
`Online Resources”, Netword.com Press Release, May 12,
`1997.
`
"Why Use Networds?", Netword.com Web Site, Company
Profile, FAQs, Feb. 1998, from http://www.netword.com.
R. Fielding, "How Roy Would Implement URNs and URCs
Today", Internet Draft of the Internet Engineering Task
Force (IETF), Jul. 7, 1995.
J. Klensin et al., "Domain Names and Company Name
Retrieval", Internet Draft of the Internet Engineering Task
Force (IETF), Jul. 29, 1997.
K. Sollins, "Architectural Principles of Uniform Resource
`Name Resolution”, Informational Memo, Internet Society,
`Jan. 1998.
`
Tim Berners-Lee, "Web Architecture from 50,000 feet",
from http://www.w3.org/DesignIssues/Architecture.html,
Sep. 1998.
`S. Kille, “Using the OSI Directory to Achieve User Friendly
`Naming”, Request For Comments: 1781, Internet Society,
`Mar. 1995.
"Global Brain To Offer Profile Searching", The Search
`Engine Report, Nov. 4, 1998.
`S. Chakrabarti, “Mining The Web’s Link Structure”, Com-
`puter (IEEE), Aug. 1999.
`Julie Pitta, “!&#$%.com”, Forbes, Aug. 23, 1999.
`J. Zittrain, “Keyword: Obsolete”, Wired, Sep. 1998.
`Scot Finnie, “You Can Get Satisfaction: Try IE5”, Windows
`Magazine Online, Jun. 1, 1999, Issue: 1006.
`“Internet Explorer 3.0 for Windows 3.1 and NT 3.51: Tips
`and Tricks”, on Microsoft Website, 1997.
"How To Search the Internet from the Address Bar in
Internet Explorer", Microsoft Article ID: Q221754, www
.microsoft.com, Jul. 17, 1999.
`“Auto Search”, from Microsoft Website, Mar. 18, 1999.
`“Microsoft and Yahoo! Make Web Searches Easier For
`Microsoft Internet Explorer 3.0 Users Auto search to Feature
`Yahoo! Search Capabilities”, Microsoft Media Alert, Aug.
`13, 1996.
Ask Jeeves sample query, from www.askjeeves.com, Dec.
`1999.
Ralph Swick et al., "Resource Description Framework
(RDF)" and "Frequently Asked Questions about RDF",
W3C Technology and Society Domain, printed Sep. 30,
1998, from http://www.w3.org/RDF and http://www.w3.org/
RDF/FAQ.
"Why Use Google! Beta" and "Google! Beta Help" from
http://www.google.com, 1999.
"What is Ask Jeeves", from http://www.askjeeves.com/
docs/about/whatisaskjeeves.html, 1999.
"NBC's Snap.Com and GlobalBrain.Net Unveil Sophisti-
cated New Technology And Services to Harness the Brain
Power of Internet Users", http://www.globalbrain.net/html/
release.html, Jun. 14, 1999.
GlobalBrain.net, Corporate-Technology, at http://www.glo-
balbrain.net/html/technology.html, 1998-99.
M. MacLachlan, "Keywords Threaten Domain Name Sys-
tem", TechWeb, Nov. 9, 1998.
M. MacLachlan, "Netscape to Release Communicator 4.5
Beta", TechWeb, Jun. 17, 1998.
"Centraal Corporation: Company Background", from http://
company.realnames.com/Backgrounder.asp, Mar. 1998.
`Amy Dunlop, “Plotting an Internet Address Revolution”,
`Internet World, Mar. 12, 1998.
Alex Lash, "A Simpler Net Address System", CNET
NEWS.COM, Mar. 12, 1998.
"Startup Offers Net Addresses Sans Dots, Dashes", Reuters,
Mar. 13, 1998, from http://www.zdnet.com/zdnn/content/
reut/0312/293902.html.
Chris Sherman, "What's New With Web Search", onlineinc
.com/onlinemag, pp. 27-31.
Jeff Pemberton, "Google Raises the Bar on Search Technol-
ogy", Organizing the World's Information, onlineinc.com/
onlinemag, pp. 43-46.
Greg R. Notess, "The Never-Ending Quest: Search Engine
Relevance", May/Jun. 2000, onlineinc.com/onlinemag, pp.
35-38.
Susan Feldman, "Find What I Mean, Not What I Say",
May/Jun. 2000, www.onlineinc.com/onlinemag, pp. 49-56.
`
`
`
`
`US 6,954,755 B2
`Page 4
`
"Up and Coming Search Technologies", May/Jun. 2000,
onlineinc.com/onlinemag, pp. 75-77.
"A .COMversation about Internet Search Engines", May 27,
2000, http://www.digitalmass.com/news/packages/click/
roundtable1.html.
Shumeet Baluja, Vibhu Mittal, Rahul Sukthankar, "High
Performance Named-Entity Extraction", http://www.ph.tn
.tudelft.nl/PRInfo/reports/msg00431.html, Jun. 29, 1999,
(abstract).
Boris Chidlovskii et al., "Collaborative Re-Ranking of
Search Results", AAAI-2000 Workshop on AI for Web
Search, Online, Jul. 30, 2000, XP002250910.
`
Alton-Scheidl, R. et al., "Select: Social and Collaborative
Filtering of Web Documents and News", Proceedings of the
5th ERCIM workshop on user interfaces for all: user-tai-
lored information environments, Online, Nov. 28,
1999-Dec. 1, 1999, XP002250911.
`
Andreas Paepcke et al., "Beyond Document Similarity:
Understanding Value-Based Search and Browsing Tech-
nologies", SIGMOD Record, ACM, USA, Online, vol. 29,
No. 1, Mar. 2000, pp. 80-92, XP002250912.
`
`* cited by examiner
`
`
`
`
`U.S. Patent
`
Oct. 11, 2005
`
`Sheet 1 of 6
`
`US 6,954,755 B2
`
`World Wide
`
`Web
`
`FIG. 1A
`
`Processing/Learning
`
`101
`
Data Base

• Index Information

• Feedback Information
`102
`
`
`
`
`U.S. Patent
`
Oct. 11, 2005
`
`Sheet 2 of 6
`
`US 6,954,755 B2
`
`Multi-User Feedback
`
`Learning
`Processing
`
• Index
• Feedback
`
`Response (a,1)
`
`Service
`
`(Search, Mapping,...)
`
`(a, n)
`
`/ \
`
`= Query Item
`
(Query or Request Item, User Case/Instance)
`
`: Query Response
`
Feedback Results
`
`
`
`
`U.S. Patent
`
Oct. 11, 2005
`
`Sheet 3 of 6
`
`US 6,954,755 B2
`
Case 1 — Ask User What Task

Parse Query
for Task Domain = i

(Specifies Task/
Domain on
Query Form)

Do Look-up
Do Logic Combinations
Rank by Feedback Rating
- All for Case of T=i

Index Data
& Feedback

Present to User

Monitor Selections
(and Other) Feedback

Record Selections
(& Other) Feedback

Feedback
Weighting
Algorithms

QT=i - Query Task = i

RT=i - List of Hits

FT=i - Hits Selected
       Plus Other
       Feedback

• Use semantics information and vocabulary to define tasks.
  - Match task specifications in terms of semantics/vocabularies.

• Segment data by task as feedback is obtained.
  - Start with all data at low probability setting, then adjust as
    feedback is obtained.
`
`FIG. 2
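The second note in FIG. 2 (segment data by task, start every item at a low probability, then adjust as feedback arrives) can be sketched roughly as follows. The moving-average update rule and the constants are illustrative assumptions for this sketch, not taken from the specification:

```python
# Illustrative sketch: per-task relevance scores that start at a low
# prior and drift toward observed selection rates as feedback
# accumulates. ALPHA and LOW_PRIOR are assumed constants.
ALPHA = 0.2
LOW_PRIOR = 0.05

class TaskSegmentedScores:
    def __init__(self):
        # scores[task][item] -> estimated probability of relevance
        self.scores = {}

    def score(self, task, item):
        return self.scores.get(task, {}).get(item, LOW_PRIOR)

    def record_feedback(self, task, item, selected):
        by_task = self.scores.setdefault(task, {})
        current = by_task.get(item, LOW_PRIOR)
        target = 1.0 if selected else 0.0
        # Exponential moving average toward the observed outcome.
        by_task[item] = (1 - ALPHA) * current + ALPHA * target

store = TaskSegmentedScores()
for _ in range(5):
    store.record_feedback("book-shopping", "page-A", selected=True)
print(store.score("book-shopping", "page-A"))
print(store.score("movie-reviews", "page-A"))  # untouched: feedback is segregated by task
```

Note that feedback recorded under one task never affects the score the same item carries under a different task, which is the segmentation idea the figure describes.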
`
`
`
`
`U.S. Patent
`
Oct. 11, 2005
`
`Sheet 4 of 6
`
`US 6,954,755 B2
`
`5
`
`QT=?
`
`Parse Query for
`
`Unspecified Task
`
`S200
`
`S202
`
`F I G ' 3
`
`Seek User History
`(Current, Prior) and
`Other Data on
`
`Task Behavior
`
`Userfrask
`Association
`
`Seek to Recognize
`Known Q-T
`Associations
`
`Combine User
`
`and QUEVY
`l“l0Vmatl0"I T0
`lnfer Likely Tasks
`
`For Each of 1 or
`More Likely
`Tasks I1, i2...
`Generate List of Hits
`
`Query/Task
`Associations
`
`=?
`
`— Query for
`Unidentified Task
`RTZI.
`- Hits for
`Inferred Task
`FT=’
`- Feedback on Hits
`(+ Optionally on Task)
`
`T ,:1
`
`7
`
`10
`
`Present 1 or More 0
`
`Hits for Each of
`1 or More Tasks
`
`Index Data
`& Feedback
`
`(Depending
`
`on Probability) “
`
`T:1
`
`Monitor Selection/
`Feedback for
`
`Hit and
`Associated Task
`
`Record
`Selection/Feedback
`
`5214
`
`Feedback
`Welghllng
`Algorithm
`
`5218
`
`For
`Hit _ Quer
`For
`Task — uer
`For
`User _ Query
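The inference step in FIG. 3 (combine the user's own task history with known query-to-task associations to rank likely tasks) might be sketched as follows; the weights and the multiplicative combination rule are illustrative assumptions, not the patented logic:

```python
# Sketch: infer likely tasks for a query whose task is unspecified,
# by blending known query->task association strengths with this
# user's task history. All weights here are assumed example values.

def infer_likely_tasks(query, query_task_assoc, user_task_history, top_n=2):
    """Return up to top_n (task, score) pairs, best first."""
    scores = {}
    for task, strength in query_task_assoc.get(query, {}).items():
        # Blend the global association with this user's habit for the task;
        # a small default prior covers tasks the user has never done.
        user_weight = user_task_history.get(task, 0.1)
        scores[task] = strength * user_weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

query_task_assoc = {"jaguar": {"car-shopping": 0.6, "wildlife-info": 0.4}}
user_task_history = {"wildlife-info": 0.9, "car-shopping": 0.2}
ranked = infer_likely_tasks("jaguar", query_task_assoc, user_task_history)
print(ranked)
```

Here the globally weaker task ("wildlife-info") wins for this particular user, which is the point of combining the two signals: hits can then be presented for one or more likely tasks, as the flowchart shows.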
`
`
`
`
`U.S. Patent
`
Oct. 11, 2005
`
Sheet 5 of 6
`
`US 6,954,755 B2
`
`
`
[FIG. 4 (drawing): rotated drawing text is garbled in the scan; legible labels appear to include "Known Tasks", "Unknown Tasks", single and compound query elements, and index data organized by task/domain.]
`
`
`
`
`U.S. Patent
`
Oct. 11, 2005
`
`Sheet 6 of 6
`
`US 6,954,755 B2
`
[FIG. 5B (drawing): feedback weighting rules. No feedback on a hit decrements its raw score by a factor (which can be zero); a selection increments the raw score by a factor; in all cases the experience level score is increased by an E factor. The subscripts on the individual factors and probability labels are not legible in the scan.]
`
`FIG. 5B
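The weighting rules of FIG. 5B (raw score moves up on a selection, down or not at all when a presented hit draws no feedback, while an experience level counter always grows) can be sketched as follows. The specific factor values are placeholders, since the figure's subscripted factors are not recoverable from the scan:

```python
# Sketch of FIG. 5B-style feedback weighting. Factor values are
# illustrative assumptions, not the figure's actual constants.

def apply_feedback(raw_score, experience, selected,
                   inc_factor=1.0, dec_factor=0.25, exp_factor=1.0):
    if selected:
        raw_score += inc_factor   # selection: increment raw score
    else:
        raw_score -= dec_factor   # no feedback: decrement (can be zero)
    experience += exp_factor      # experience level always increases
    return raw_score, experience

score, exp = 5.0, 0.0
score, exp = apply_feedback(score, exp, selected=True)
score, exp = apply_feedback(score, exp, selected=False)
print(score, exp)
```

Tracking the experience level separately from the raw score lets later logic distinguish an item that scores low because users rejected it from one that simply has not been shown often.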
`
`
`
`
`US 6,954,755 B2
`
`1
`TASK/DOMAIN SEGMENTATION IN
`APPLYING FEEDBACK TO COMMAND
`CONTROL
`
`CROSS-REFERENCE TO RELATED
`APPLICATION
`
`This application is a division of application Ser. No.
`09/651,243, filed Aug. 30, 2000.
`
`BACKGROUND OF THE INVENTION
`
`1. Field of the Invention
`
The present invention is directed to an improved method
and apparatus for the utilization of user feedback particu-
larized to a specified or inferred task, to improve the ability
to respond accurately to user commands.
`2. Description of the Related Art
The development of the World Wide Web (hereinafter, the
Web), a subset of the Internet that includes all connected
servers offering access to Hypertext Transfer Protocol
(HTTP) space, has greatly increased the popularity of the
Internet in recent years. To navigate the Web, browsers have
been developed that enable a user of a client computer
connected to the Internet to download Web pages (i.e., data
files on server electronic systems) written in HyperText
Mark-Up Language (HTML). Web pages may be located on
the Web by means of their electronic addresses, known as
Uniform Resource Locators (URLs), which uniquely iden-
tify the location of a resource (web page) within the Web.
Each URL consists of a string of characters defining the
protocol needed to access the resource (e.g., HTTP), a
network domain name, identification of the particular com-
puter on which the resource is located, and directory path
information within the computer's file structure. The domain
name is assigned by Network Solutions Registration Ser-
vices after completion of a registration process.
`Search engines have been developed to assist persons
`using the Web in searching for web pages that may contain
`useful information. One type of search engine, exemplified
by Altavista™, Lycos®, and Hotbot®, uses search
`programs, called “web crawlers”, “web spiders”, or
`“robots”, to actively search the Web for pages to be indexed,
`which are then retrieved and scanned to build indexes. Most
often this is done by processing the full text of the page and
extracting words, phrases, and related descriptors (word
adjacencies, frequencies, etc.). This is often supplemented
`by examining descriptive information about the Web docu-
`ment contained in a tag or tags in the header of a page. Such
`tags are known as “metatags” and the descriptive informa-
`tion contained therein as “metadata”. Another type of search
engine, exemplified by Yahoo!® (www.yahoo.com), does
`not use web spiders to search the web. Instead, these search
`engines compile directories of web sites that editors deem to
`be of interest to the users of the service and the search is
`performed using only the editor-compiled directory or direc-
`tories. Both types of search engines output a listing of search
`results believed to be of interest to the user, based upon the
`search term or terms that the user input to the engine.
Recently, search engines such as DirectHit™
(www.directhit.com) have introduced feedback and learning
techniques to increase the relevancy of search results.
DirectHit™ purports to use feedback to iteratively modify
search result rankings based on which search result links are
actually accessed by users. Another factor purportedly used
in the DirectHit™ service in weighting the results is the
amount of time the user spends at the linked site. The theory
behind such techniques is that, in general, the more people
that link on a search result, and the longer the amount of time
they spend there, the greater the likelihood that users have
found this particular site relevant to the entered search terms.
Accordingly, such popular sites are weighted and appear
higher in subsequent result lists for the same search terms.
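The click-plus-dwell-time weighting described for DirectHit is not disclosed in detail here; a toy re-ranking along those lines, with assumed weights, could look like:

```python
# Toy popularity re-ranking: results clicked more often, and held
# longer, rise in later rankings for the same search terms.
# CLICK_WEIGHT and DWELL_WEIGHT are illustrative assumptions.
CLICK_WEIGHT = 1.0
DWELL_WEIGHT = 0.01   # per second spent at the linked site

def rerank(results, click_log):
    """click_log: url -> list of dwell times (seconds) for past clicks."""
    def popularity(url):
        dwells = click_log.get(url, [])
        return CLICK_WEIGHT * len(dwells) + DWELL_WEIGHT * sum(dwells)
    return sorted(results, key=popularity, reverse=True)

results = ["site-a", "site-b", "site-c"]
click_log = {"site-b": [120, 300], "site-c": [5]}
print(rerank(results, click_log))
```

A frequently clicked, long-dwell site ends up first even if the engine's original text-matching order placed it lower.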
The Lycos® search engine (www.lycos.com) also uses
`feedback, but only at the time of crawling, not in ranking of
`results. In the Lycos® search engine, as described in U.S.
`Pat. No. 5,748,954, priority of crawling is set based upon
`how many times a listed web site is linked to from other web
`sites. The Google® search engine (www.google.com) and
`IBM®’s Clever system use such information to rank pos-
`sible hits for a search query.
Two of the important techniques available to assist in
locating desired Web resources will be referred to herein-
after as discovery searching and signifier mapping. In dis-
covery searching, a user desires all, or a reasonable number
of, web sites highly relevant to entered search terms. In such
`searching, the criterion for a successful search is that as
`many of the highly relevant web sites as possible be dis-
`covered and presented to the user as prominently as possible.
`In signifier mapping, a user enters a guessed name or
`signifier for a particular target resource on the Web. The
`criterion for a successful signifier mapping is that the user is
`provided with the URL of, or connected to,
`the specific
`target resource sought.
`One attempt to provide the ability to map a signifier, or
`alias, to a specific URL utilizes registration of key words, or
`aliases, which when entered at a specified search engine,
`will associate the entered key word with the URL of the
`registered site. This technique is implemented commercially
`by NetWord® (www.netword.com). However,
`the Net-
`Word® aliases are assigned on a registration basis, that is,
`owners of web sites pay NetWord a registration fee to be
`mapped to by a particular key word. As a result, the URL
`returned by NetWord may have little or no relation to what
`a user actually would be looking for. Another key word
`system, RealNames (www.realnames.com), similarly allows
web site owners to register, for a fee, one or more "Real-
Names" that can be typed into a browser incorporating
RealNames' software, in lieu of a URL. Since RealNames
also is registration based, there once again is no guarantee
that the URL to which the user is directed will be the one he
intended.
`
`
`Related to search techniques are preference learning and
`rating mechanisms. Such mechanisms have been used, for
`example, in assessing customer satisfaction or in making
`recommendations to users based on what customers with
`similar interests have purchased in the past. In existing
preference learning and rating mechanisms, such as collabo-
rative filtering (CF) and relevance feedback (RF), the objec-
tive is to evaluate and rank the appeal of the best n out of m
sites or pages or documents, where none of the n options are
necessarily known to the user in advance, and no specific
one is presumed to be intended. It is a matter of interest in
any suitable hit, not intent for a specific target. Results may
be evaluated in terms of precision (whether "poor" matches
are included) and recall (whether "good" matches are
omitted).
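Precision and recall as used here can be made concrete with a small worked example; the document sets below are hypothetical:

```python
# Hypothetical worked example of precision and recall for one search.
returned = {"d1", "d2", "d3", "d4", "d5"}   # results shown to the user
relevant = {"d1", "d2", "d6", "d7"}         # truly "good" matches

true_hits = returned & relevant
precision = len(true_hits) / len(returned)  # how many poor matches were included?
recall = len(true_hits) / len(relevant)     # how many good matches were omitted?
print(precision, recall)
```

For the signifier-mapping case discussed below, where only a single intended target exists, recall over a result set is uninformative: the search either surfaces the one intended item or it fails.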
`A search for “IBM” may be for the IBM® Web site, but
`it could just as likely be for articles about IBM® as a
`company, or articles with information on IBM®-compatible
`PCs, etc. Typical searches are for information about the
`search term, and can be satisfied by any number of “rel-
`evant” items, any or all of which may be previously
`
`
`
`
`unknown to the searcher. In this sense there is no specific
`target object (page, document, record, etc.), only some open
`ended set of objects which may be useful with regard to the
`search term. The discovery search term does not signify a
`single intended object, but specifies a term (which is an
`attribute associated with one or more objects) presumed to
`lead to any number of relevant items. Expert searchers may
`use searches that specify the subject indirectly,
`to avoid
`spurious hits that happen to contain a more direct term. For
`example, searching for information about the book Gone
`With The Wind may be better done by searching for Mar-
`garet Mitchell, because the title will return too many irrel-
evant hits that are not about the book itself (but may be
`desired for some other task).
`In other words, the general case of discovery searching
`that typical search engines are tuned to serve is one where
`a search is desired to return some number, n, of objects, all
`of which are relevant. A key performance metric, recall, is
`the completeness of the set of results returned. The case of
`a signifier for an object, is the special case of n=1. Only one
`specific item is sought. Items that are not intended are not
`desired—their relevance is zero, no matter how good or
interesting they may be in another context. The top
DirectHit™ for "Clinton" was a Monica Lewinsky page.
That is probably not because people searching for Clinton
actually intended to get that page, but because of serendipity
and temptation—which is a distraction, if what we want is
to find the White House Web site.
`
Many self-contained document search systems, such as
LexisNexis® and Medline®, have long exploited semantic
metadata, machine-readable information as to the content
and type of an associated document available on a network,
to enable users to more effectively constrain their searches.
Thus in searching for the Times review of Stephen King's
new book, a user might explicitly search for "pub-name=
Times and content-type=review and author=King." Search
systems have enabled searchers to exploit this explicitly in
their query language, and attempts at natural language
searching have sought to infer such semantics. However,
because of the small user population of such systems, there
has been no attempt to utilize feedback to improve search
results in such systems.
`Further, it has been recognized that different people using
`the same search terms when searching may expect or desire
`different results. For example, in the context of discovery
`searching, it has been postulated that when a man enters the
`search term “flowers” in a search engine, he is likely to be
`interested in ordering flowers, whereas when a woman
`enters the same search term, she is more likely to be seeking
`information about flowers. Some currently existing search
engines, such as DirectHit™ (www.directhit.com) and Glo-
balBrain™ (www.globalbrain.net), purport to take gender
and other demographic data, such as country, race, and
income, into account in formulating results for searches.
`However, prior art search techniques such as these do not
`take into account the type of task/domain the user is working
`in when deciding what results would be desired, nor do the
`techniques utilize iterative learning based on experiential
`data or feedback particularized to the task/domain.
`There is therefore a need to provide a method for cali-
`brating the use of feedback in searching and other
`command-responsive control
`techniques, such as robot
`control, so as to correlate accumulated user feedback with
`the particular task/domain being performed by the user.
`There also is a need to develop a technique of using
`semantic metadata for use in search systems having a large
`
`4
`user population to assist in determining the taslddomain of
`the user and then to use feedback specific to that task/'
`domain.
`
`SUMMARY OF THE INVENTION
`
`In view of the above-mentioned deficiencies of the prior
`art,
`it is an object of the present invention to provide a
`method of utilizing heuristic, adaptive feedback-based
`techniques, while at the same time customizing use of the
`feedback to particular tasks or domains. According to one
`advantageous aspect of the present invention, in applying
`learning techniques to searches or signifier mapping, or to
`more general control techniques, particularized learning and
`experiential data gathered during previous iterations of the
`same or similar tasks is used, and feedback gathered from
`different types of tasks is ignored, or at least given less
`weight, when formulating responses to user commands.
Note that the term "task" is used to refer generally to the
concept of a specific task, the term "domain" is used to refer
generally to the concept of a specific domain of discourse,
and the term "task/domain" is used to refer to a task and/or
a domain.
`In accordance with the above objects, in accordance with
`one aspect of the present invention, there is provided an
`apparatus for responding to a current user command asso-
`ciated with one of a plurality of tasks. The apparatus
`comprises: means for storing cumulative feedback data
`gathered from multiple users during previous operations of
`the apparatus and segregated in accordance with the plural-
`ity of tasks; means for determining the current task with
`which the current user command is associated; means for
`determining a current response to the current user command
`on the basis of that portion of the stored cumulative feedback
`data associated with the current task; means for communi-
`cating to the user the current response; and means for
`receiving from the user current feedback data regarding the
`current response. The current feedback data is added to the
`cumulative feedback data stored in the storing means and
`associated with the current task.
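Read as a procedure, the claimed feedback loop of this aspect could be sketched as follows. The storage layout, the tagged-command convention for task determination, and the vote-counting response logic are all illustrative assumptions, not the claimed implementation:

```python
# Sketch of the claimed loop: respond from task-segregated cumulative
# feedback, then fold the user's new feedback back into the store.

class TaskFeedbackResponder:
    def __init__(self):
        # cumulative[task][command] -> {response: vote count}
        self.cumulative = {}

    def determine_task(self, command):
        # Stand-in for the task-determination logic; here the caller
        # simply tags the command as "task:command".
        task, _, cmd = command.partition(":")
        return task, cmd

    def respond(self, command):
        task, cmd = self.determine_task(command)
        votes = self.cumulative.get(task, {}).get(cmd, {})
        # Best-voted response for this task's feedback, or a default.
        return max(votes, key=votes.get) if votes else "no-stored-response"

    def add_feedback(self, command, response):
        task, cmd = self.determine_task(command)
        votes = self.cumulative.setdefault(task, {}).setdefault(cmd, {})
        votes[response] = votes.get(response, 0) + 1

r = TaskFeedbackResponder()
r.add_feedback("shopping:jaguar", "jaguar-cars-page")
r.add_feedback("wildlife:jaguar", "jaguar-animal-page")
print(r.respond("shopping:jaguar"))
print(r.respond("wildlife:jaguar"))
```

The same command text yields different responses under different tasks because the cumulative feedback is consulted only within the current task's segment, which is the core of the claim.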
`In accordance with another aspect of the present
`invention, there is provided a method for responding to a
`current user command associated with one of a plurality of
`tasks. The method comprises the steps of: determining the
`current task with which the current user command is asso-
`ciated; determining a current response to the current user
`command on the basis of previously gathered and stored
`feedback data associated with the current task; commun