UNITED STATES PATENT AND TRADEMARK OFFICE

______________

BEFORE THE PATENT TRIAL AND APPEAL BOARD

______________

AMERICAN EXPRESS COMPANY, AMERICAN EXPRESS TRAVEL RELATED SERVICES COMPANY, INC., EXPEDIA, INC., HOTELS.COM LP, HOTELS.COM GP, LLC, HOTWIRE, INC., ORBITZ WORLDWIDE, INC., PRICELINE.COM, INC., TRAVELOCITY.COM LP, and YAHOO! INC.
Petitioner,

v.

METASEARCH SYSTEMS, LLC,
Patent Owner

______________

Case CBM2014-00001
Patent 8,326,924

______________

SUPPLEMENTAL DECLARATION OF GARY LIAO IN SUPPORT OF PETITION FOR POST-GRANT REVIEW OF A COVERED BUSINESS METHOD UNDER 35 U.S.C. § 321 AND AIA, § 18
1. My name is Gary Liao. I am currently President and owner of WhereExactly, Inc., an Oregon corporation founded in 2005, providing software consulting, computer and software forensics, and location-specific advertising services. I have a Bachelor of Science degree in Electrical Engineering from the University of California, San Diego (1988), and a Master of Business Administration degree from Portland State University (1999).

2. I have spent the last 25 years working either as a software engineer or as a consultant focused on analyzing software. From 1988 to 2005, I worked as a software engineer for a range of companies including Advanced Micro Devices, The Scripps Research Institute, Biosym Technologies, Charles Schwab & Co., Inc., Integrated Surgical Systems, Inc., DAT Services, Inc., Intel Corp., Step Technology, Webridge, Inc., and SoftSource Consulting. In 2005, I started my own Internet advertising and consulting company, WhereExactly, Inc.
3. In 1996, I designed and implemented a message-oriented middleware protocol for a distributed database client-server Internet-based application. Through 1999, I was the technical lead providing architectural guidance and/or software development for e-commerce stores including model.com, clique.com, forbes.clique.com, animalfairboutique.com, skinet.clique.com, gear.com, danner.com, and 800.com. These websites provided e-commerce functionality using various technologies including Microsoft Site Server Commerce Edition, Microsoft SQL Server, and CyberCash for credit card transactions. I was also the technical lead and architect for the Oregon Department of Fish and Wildlife (ODFW) Point of Sale (POS) system. This project enabled the sale of all ODFW fishing and hunting licenses, tags, parking permits, and raffle tickets throughout the state of Oregon. The system utilized a 3-tier client-server architecture over the Internet, with each client computer at a point-of-sale location utilizing an Internet browser to conduct transactions against a central server. Through 2004, I worked at an Internet startup, Webridge. As the name suggests, Webridge provided technology using the Web (Internet) to create a bridge between businesses and consumers, and between businesses and businesses. Among other tasks, I was responsible for incorporating various technologies into the product solutions, including Microsoft Commerce Server 2000 and BizTalk Server. Webridge, like many other technology companies of the early 2000s, struggled to address the challenge of scale: how to increase the number of client computers and still provide adequate performance. I am therefore well aware of the technical challenges of achieving scale at that time. Currently, I provide Internet advertising and consulting services. I provide Search Engine Optimization (SEO) and other consulting services to various websites, and I currently provide software development consulting services to Johns Hopkins University School of Medicine, Johns Hopkins University School of Public Health, Huron Consulting Group, and Ochsner Health Systems. I also provide litigation consulting services for a number of clients.
4. Based on the above experience and qualifications, I have a solid understanding of the knowledge and perspective of a person of ordinary skill in this technical field in 1999-2001.

5. I am being compensated for my time spent in connection with this matter at my standard consulting rate of $150/hr. I have no financial interest in the outcome of the related litigations or this proceeding.
Knowledge Broker was a Metasearch Engine

6. The opinions that follow in this section are responsive to the Patent Owner’s contention that Knowledge Broker was not a metasearch engine. Knowledge Broker was a metasearch engine by definition using either the Board’s Preliminary Construction or the Patent Owner’s Proposed Construction for metasearching. The definition I used for metasearch engine is an application or other instructions on a hardware device that performs metasearching as defined below (CBM2014-00001 – Patent Owner’s Response to Petition, p. 25):
Board’s Preliminary Construction: [sending] an unstructured keyword query or queries to plural hosts, as requested by a user, and grouping, sorting, and returning to the user the results received from each host

Patent Owner’s Proposed Construction: sending at least one search query to plural hosts, and returning the results received from each host
7. Exhibit 1006, “Constraint-based Information Gathering for a Network Publication System,” and Exhibit 1007, “Agent-Based Document Retrieval for the European Physicists: A Project Overview,” together describe both a “framework” with varied applications as well as a particular example of an “application environment, namely the Physicists Network Publishing System (PNPS), that serves as a testbed for the knowledge broker framework.” (Ex 1006, p. 4.) Also, in this example application, the “CBKB system acts as a front-end to a printing-on-demand system, originally called Physicists Network Publishing System (PNPS).” (Ex 1007, p. 4.) In part, it is this exemplary application environment, including but not limited to the web query interface, brokers, wrappers, constraints, data repositories (aka search engines), and integration with print service backend, which is also described as “a uniform meta-search interface with clear semantics, developed on top of different search engines” (Ex 1007, p. 4), which I refer to as “Knowledge Broker,” and which was a metasearch engine. These papers, Exhibits 1006 and 1007, describe this specific example of Knowledge Broker with particular data repositories representing a broad range of heterogeneous data sources and a specific print service backend, but these papers also describe other possible applications of the Knowledge Broker framework, such as bargain finding and virtual catalog systems. (Ex. 1006, p. 2). This broader framework and variety of applications described in these references is encompassed by what I refer to as the “Knowledge Broker” metasearch engine as well.
8. Knowledge Broker accepts a search query request from a user. Exhibit 1006, p. 10, Figure 2: “Form-based dialogue for request specifications,” shows an example of a user search query request. Exhibit 1006, p. 10, continues: “The user-friendly request specification is easy to learn and does not require background information on the internally used constraint format, the request syntax of particular data repositories nor on their internal data format.”

9. Knowledge Broker then decomposes and transforms the query request as necessary and then sends the request real-time (Ex. 1007, p. 12) to plural unique hosts (aka data repositories).

“After the user has specified the request (via the form-based dialogues, as illustrated in Fig. 2), the specifications are automatically transformed into a corresponding constraint structure. The generated constraint-based query is then communicated to the generic broker, which further initiates all steps (decomposition of requests into subrequests, threshold checks, recomposition of answers, etc.) to answer the specified request.

Apart from this, the homogenization of the request specification is achieved by presenting a set of possibilities which depend only on the search domain, no matter which of the included data repository and external search tools are contacted to answer the request (e.g. fields for author, title, date, etc., for bibliographical searches). This is achieved by transforming the request into a possibly more general, less precise, external search format of the corresponding external data repository, and by later filtering out the inappropriate results in the constraint solver.” (Ex. 1006, pp. 10-11.)
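To illustrate the broker-side flow the passage above describes (decomposing a request into per-repository subrequests, relaxing each subrequest into a repository’s more general external format, and filtering out inappropriate results in the constraint solver), the following short Python sketch is offered for illustration only. It is my own hypothetical example, not code from Exhibits 1006 or 1007, and every name and field in it is invented.

    # Hypothetical sketch only; not code from Exhibits 1006 or 1007.
    # A broker decomposes one constraint-based request into per-repository
    # subrequests, sends each subrequest in the (possibly more general) form
    # the repository understands, and then filters the returned hits against
    # the user's original, more precise constraints.

    def decompose(request, repositories):
        # One subrequest per repository, keeping only the fields it supports.
        return [(repo, {field: value for field, value in request.items()
                        if field in repo["fields"]})
                for repo in repositories]

    def broker_search(request, repositories):
        answers = []
        for repo, subrequest in decompose(request, repositories):
            hits = repo["search"](subrequest)  # external search, may over-match
            # Constraint-solver step: keep only hits satisfying the full request.
            answers.extend(hit for hit in hits
                           if all(hit.get(field) == value
                                  for field, value in request.items()))
        return answers

In this sketch a repository is simply a dictionary with a list of supported fields and a search callable; the actual CBKB system used constraint structures and wrappers, as described in the exhibits.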
10. Knowledge Broker provides the service of searching plural unique hosts. Exhibit 1007, p. 8, describes the specific implementation of Knowledge Broker as follows:

“The CBKB system already serves as a testbed for research activities in the area of digital libraries at RXRC Grenoble. Our second project was to respond to the physics community’s need for integrated services by connecting PhysDoc, Physdis, and the Augsburg mirror of LANL.

These repositories are distinct and heterogeneous enough to test the ability of CBKB to integrate new backends.”

These repositories are the plural unique hosts also referred to in Exhibit 1007 as “different search engines.” (Ex. 1007, p. 4.) These repositories represent a broad range of heterogeneous data sources. The references disclose Knowledge Broker searching: heterogeneous backend data sources (Ex. 1006, pp. 2, 9), including “several document databases” (Ex. 1006, p. 4); different external archive systems, each managed by its own management software (Ex. 1006, p. 4, Fig. 1; Ex. 1007, p. 8); different kinds of indexes and search tools (Ex. 1006, p. 7); interdisciplinary data sources (Ex. 1006, p. 11); data repositories with SQL interfaces, or http interfaces, or Z39.50 interfaces (Ex. 1006, p. 13); data repositories indexed by the Harvest search and index tool (Ex. 1007, pp. 4, 7); data sources “no matter which” search engine or engines they use (Ex. 1007, pp. 4, 6, 9); Webcrawlers (Ex. 1007, p. 11); different brokers and databases (Ex. 1007, pp. 5, 6); and distributed brokers, repositories and document servers (Ex. 1007, p. 13). And, these references state that the Knowledge Broker architecture and model apply generally, not just to the exemplary scientific document domains mentioned in the paper. (Ex. 1007, p. 1.)
11. Knowledge Broker receives search results from plural unique hosts in response to its search queries sent to those hosts. Exhibit 1007, p. 6, states the following:

“What is missing, is distributed joint-functionality among the different data repositories, and a sophisticated way of processing the search results.

To answer this need we have designed and implemented a heterogeneous broker, which logically combine heterogeneous information retrieved from other brokers, on-line repositories, and databases (Borghoff and Schlicter 1996), in reply to a physicist’s search query.”

12. This clearly describes receiving search results from plural unique hosts. Further, Exhibit 1007, Section 3 “Current Implementation,” p. 12, describes the results in more detail, stating how a wrapper is created for each of the plural unique hosts (aka servers) and then Knowledge Broker “queries the server and receives the results in html-format.” This is described below:

“Connecting an external server to the system is done by analyzing its search interface, then writing a wrapper for it. This wrapper receives the description of the constraints corresponding to the query, translates them into the query-string required by the search script, verifies that the indicated fields are accepted by the server and provides default values for required fields not specified by the user. It then queries the server and receives the results in html-format. Finally it parses the results and translates them into the constraint format accepted by the CBKB system.”
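As a purely illustrative sketch of the wrapper behavior the passage above describes (translating constraints into the server’s query string, supplying defaults for required fields, querying the server over HTTP, parsing the HTML results, and retranslating them into the broker’s internal format), the following hypothetical Python example is my own; the class name, URL, fields, and HTML pattern are invented and are not taken from the CBKB implementation.

    # Hypothetical sketch only; names, URL, fields, and HTML pattern are invented.
    from urllib.parse import urlencode
    from urllib.request import urlopen
    import re

    class ExampleArchiveWrapper:
        search_url = "http://archive.example.org/search"   # invented server URL
        accepted_fields = {"author", "title", "year"}      # fields the server accepts
        required_defaults = {"year": "any"}                 # defaults for required fields

        def query(self, constraints):
            # Translate the constraint description into the server's query string,
            # dropping fields the server does not accept.
            params = {f: v for f, v in constraints.items() if f in self.accepted_fields}
            for field, default in self.required_defaults.items():
                params.setdefault(field, default)
            # Query the server and receive the results in HTML format.
            html = urlopen(self.search_url + "?" + urlencode(params)).read().decode()
            # Parse the results and retranslate them into the broker's record format.
            return [{"title": title} for title in re.findall(r"<li>(.*?)</li>", html)]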
13. Knowledge Broker groups, sorts and returns to the user the search results received from the plural unique hosts. “The final output can be displayed in any of a number of formats selectable by the user: readable on screen as bibliographical data (authors, title, status, link, abstract); in summary form (ranking or sorting the results alphabetically by title or author for example); or displaying the complete information found about each matched document.” (Ex 1007, p. 9.) Knowledge Broker queries the plural unique hosts in real-time in response to a user query request (which is different than how a Search Engine works) as described in Exhibit 1007, p. 7, Section 2.2 “Features of the Constraint-Based Knowledge Brokers” (see excerpt below):

“The key features on the Constraint-Based Knowledge Brokers are:

…

2. Concurrent Asynchronous Searches – A broker search engine can launch many concurrent searches that in principle could go on forever, be checked, refined and re-launched periodically. This is a radically different user paradigm (and a complementary one) than the “one query at a time” kind of interaction with which Web search engines support today.

…

4. Knowledge Combination – The broker search engine allows combination of information returned by different information sources. Thus, this feature introduces a form of “joining,” similar to that provided by database systems in the context of information gathering from multiple sources in distributed domains.”
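To illustrate the two quoted features (concurrent searches against plural hosts, and combining and sorting the returned information for display to the user), the following hypothetical Python sketch is my own example and not code from the exhibits; the wrapper objects and their query method are assumptions, along the lines of the illustrative wrapper shown earlier.

    # Hypothetical sketch only; the wrapper objects and their query() method are
    # assumptions, not code from Exhibits 1006 or 1007.
    from concurrent.futures import ThreadPoolExecutor

    def metasearch(query, wrappers, sort_key="title"):
        # Launch one search per host concurrently (concurrent asynchronous searches).
        with ThreadPoolExecutor() as pool:
            result_lists = list(pool.map(lambda w: w.query(query), wrappers))
        # Combine ("join") the information returned by the different hosts,
        # tagging each hit with its source, then sort for display to the user.
        combined = [dict(hit, source=type(wrapper).__name__)
                    for wrapper, hits in zip(wrappers, result_lists)
                    for hit in hits]
        return sorted(combined, key=lambda hit: hit.get(sort_key, ""))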
14. Knowledge Broker allows the user to pose both structured and unstructured (aka free-text) keyword queries. Exhibit 1006, p. 11, describes the user interface to include:

• “default set of structured search attributes depending on the search domain with supplementary precedence operators.

• A “free-text” entry field which allows additional attributes or untyped keywords as well as combinations of them.”
15. In summary, Knowledge Broker was a metasearch engine, and the Knowledge Broker references, Exhibits 1006 and 1007, disclose “metasearching.” These Knowledge Broker references disclose, in response to receiving a search query from a user, sending the query in parallel to multiple backend data repositories, and, in real-time in response to a user’s search query, contacting external data sources (search engines, databases, etc.) to answer the query. (Ex. 1006, pp. 2, 7, 11-12; Ex. 1007, pp. 7, 10, 12.) These Knowledge Broker references disclose: sending http requests to the data archives (Ex. 1006, p. 12); completing an external search (Ex. 1006, p. 12); sending concurrent subrequests and requests (Ex. 1006, p. 2; Ex. 1007, p. 7); providing concurrency control among related requests (Ex. 1006, p. 7); contacting external search tools to answer a user’s search query (Ex. 1006, p. 11); sending partial requests in parallel (Ex. 1006, p. 11); performing some initial checks before an external search is initiated (Ex. 1006, p. 12); receiving 12 search hits collectively from two external data archives in response to a user query (Ex. 1007, p. 10); and generating a search query string for querying an external data source in response to a user’s search request (Ex. 1007, p. 12), after which “it then queries the server” (Ex. 1007, p. 12).
16. Knowledge Broker is a metasearch engine using the Board’s preliminary construction because, as described above in paragraph 14, Knowledge Broker discloses “an unstructured keyword query or queries,” Knowledge Broker discloses sending those queries “to plural hosts” (see paragraphs 9-10 above), Knowledge Broker discloses sending the queries “as requested by a user” (see paragraphs 8-9 & 14 above), Knowledge Broker discloses “grouping, sorting, and returning to the user” the results (see paragraph 13 above), and Knowledge Broker discloses returning to the user “the results received from each host” (see paragraphs 11-13 above).
17. Knowledge Broker is a metasearch engine using Patent Owner’s broader Proposed Construction because, as described above in paragraphs 9 and 10, Knowledge Broker discloses “sending at least one search query to plural hosts,” and Knowledge Broker discloses “returning the results received from each host” (see paragraphs 11-13 above).
18. Dr. Carbonell came to a contrary conclusion. To the extent that Dr. Carbonell purports to describe the disclosures and teachings of the Knowledge Broker references, Exhibits 1006 and 1007, to a person of skill in the art in 1999-2000, I respectfully disagree on a number of levels, and I refute his conclusion for a number of reasons described in further detail below:

a) Misinterprets use of Harvest System and omits non-Harvest servers.

b) No support for statements that Knowledge Broker pre-processes data and searches internal indexed version of crawled data.

c) Mischaracterizes Knowledge Broker as a Search Engine.

d) Mischaracterizes external archives as web servers, using a narrow definition of Search Engine in the wrong time frame.

e) Introduces irrelevant information (e.g. specific data types, crawling, search engines).
Misinterprets use of Harvest System and omits non-Harvest servers

19. Dr. Carbonell states in Exhibit 2006, pp. 14-15:

“Knowledge Broker was not a metasearch engine. It was a single search engine. At information ingestion time, it accessed different data sources, via its Harvester module, corresponding to a crawler of an individual search. The Harvester operated much like a standard search engine’s crawler in accessing different information sources and/or websites at information ingestion time (Petitioner’s Exhibit 1007, pp. 3-4).”
20. Dr. Carbonell states that Knowledge Broker had a “Harvester module,” but this is not supported by the references. Dr. Carbonell cites to pages 3-4 of Exhibit 1007, which state that the “PhysDep” service “supports the use of the Harvest search engine over a network of about 1000 local web-servers maintained at different European physics departments. The documents to be searched only need to be deposited on the department servers which are regularly searched and indexed by the Harvest system.”

21. Dr. Carbonell’s referenced “Harvester module” is not part of Knowledge Broker. Rather, it is part of an entirely separate system referred to as “Harvest search engine” and “Harvest system” (NOT Knowledge Broker). The references do not state that Knowledge Broker contains a Harvest Module. PhysDoc and PhysDiss, both external archives searchable via Knowledge Broker, were merely part of the Harvest system.
22. Also, even if Knowledge Broker included a Harvest Module as Dr. Carbonell contends (which it does not), Dr. Carbonell ignores/omits the clear disclosure in the Knowledge Broker references of other non-Harvest servers and other real-time querying of backend external archives:

a) “Harvest” was one among other backend external archives supported. See Ex. 1007, p. 8, Fig. 1.

b) Also, Exhibit 1007, p. 7, states, “At the backend, the service will connect, among others, to the Harvest servers within the European Physical Society,” see below:

c) Also, although some backend external archives searched by Knowledge Broker supported “harvest” servers, this did not require Knowledge Broker or the client of the external archive to crawl to access the data. In fact, Exhibit 1006, p. 12, describes not crawling but rather querying based on a user request, see below:

d) This passage describes each of the supported external archives as including backend interfaces that encapsulate the following: generating a search request from the user-initiated constraint-based query specification, submitting the request, parsing the results, and retranslating them for each search hit. This describes real-time querying based on a user request rather than automated crawling.
23. So Dr. Carbonell’s statement that Knowledge Broker accessed data sources “via its Harvester Module” is not supported by the references, and Dr. Carbonell’s conclusions that Knowledge Broker was not a metasearch engine based on this statement ignore clear disclosure in Exhibits 1006 and 1007 that Knowledge Broker performed real-time querying of other non-Harvest, heterogeneous sources such as Augsburg and Karlsruhe in addition to real-time querying of Harvest servers (and querying of Harvest servers did not require Knowledge Broker to crawl the information sources).
No support for statement that Knowledge Broker pre-processes data

24. Dr. Carbonell states in Exhibit 2006, p. 15:

“Then, Knowledge Broker pre-processed that information, in part automatically and in part manually, to formulate and link to the constraints and relations of the domain (physics articles). When responding to queries, Knowledge Broker accessed its pre-processed internal version of the collected information, much like a search engine accesses its internal pre-processed index of the crawled web-sites at query processing time. The pre-processing includes the establishment and extension of domain-specific physics “constraints” to relate the various information sources in the domain to create the broker.”
25. However, there is no mention or use of the term “pre-process” in either Exhibit 1006 or Exhibit 1007. Dr. Carbonell’s citations for the above point to the following excerpt from Exhibit 1006, p. 7:

“Using this broker system to exploit a wide range of physics-specific archives, many of the above mentioned shortcomings can be overcome by flexibly adding missing functionalities to the individual data access interfaces. First of all, it is possible to homogenize or even extend the admissible types of requests with respect to the different data repositories to be accessed, even when the accessed data repositories do not exactly match the type of request specification.”

26. But this passage discusses homogenizing “data access interfaces,” not pre-processing data. What this passage is actually describing is creating an abstraction layer so that different/heterogeneous data repositories can respond to user queries even when a data repository’s interface specification doesn’t exactly match the user query request.
27. Thus, the passage in the Knowledge Broker references cited by Dr. Carbonell does not support his statements. Nor do the Knowledge Broker references otherwise support Dr. Carbonell’s statement that “[w]hen responding to queries, Knowledge Broker accessed its pre-processed internal version of the collected information, much like a search engine accesses its internal pre-processed index of the crawled web-sites at query processing time.” I do not see any disclosure in Exhibits 1006 and 1007 that indicates Knowledge Broker searched an internal indexed version of crawled data instead of external data sources when responding to a user’s search query.
Mischaracterizes Knowledge Broker as a Search Engine

28. It is unnecessary to define “Search Engine” for the purpose of establishing whether “metasearching” is performed, because neither the Board’s preliminary construction nor the Patent Owner’s proposed construction of “metasearching” requires querying a “Search Engine” but rather simply requires querying a “host.” However, Dr. Carbonell repeatedly calls Knowledge Broker a search engine and not a metasearch engine. I believe Dr. Carbonell is defining search engine to be an application that includes a crawler and pre-processes crawled information. But there is no use of the term crawl or pre-process in either Exhibit 1006 or Exhibit 1007 (other than Webcrawlers being among the external repositories searched). As explained above, Dr. Carbonell’s statements that Knowledge Broker included a Harvest Module to crawl information and that Knowledge Broker pre-processed data to create an internal index are not supported by the references. Dr. Carbonell’s characterization of Knowledge Broker as a search engine is based on these unsupported statements, and is therefore also unsupported. Exhibits 1006 and 1007 do not teach a person having ordinary skill in the art in 1999-2000 that Knowledge Broker was a search engine rather than a metasearch engine. To the contrary, and as stated in further detail above, Exhibit 1006 and Exhibit 1007 clearly show to a person having ordinary skill in the art in 1999-2000 that Knowledge Broker performed metasearching.
Mischaracterizes external archives as web servers

29. Dr. Carbonell’s statement that Knowledge Broker was not a metasearch engine appears to be based on a narrow interpretation of the term “search engine” today as opposed to the meaning a person having ordinary skill in the art in 1999-2000 would have given that phrase. In Exhibit 2006, p. 16, Dr. Carbonell states:

“But Knowledge Broker’s “search engines” are only what we would call today web servers, repositories of information that must be ingested (crawled or otherwise downloaded) as done by a simple search engine’s crawling capability.”

30. I do not agree that a person having ordinary skill in the art in 1999-2000 would read the Knowledge Broker references and understand “search engines,” as used in Exhibit 1007, to mean “web servers, repositories of information that must be ingested (crawled or otherwise downloaded),” nor would the person having ordinary skill in the art characterize all Knowledge Broker external archives, which Knowledge Broker self-described as “different search engines” (Ex. 1007, p. 4; see also Ex. 1007, p. 9), as “web servers.” As described in the particular implementation above, Knowledge Broker queried and searched in real-time various heterogeneous sources such as Augsburg and Karlsruhe, as well as Harvest servers. The Knowledge Broker references also disclose accessing external archives via the http protocol, a Z39.50 interface, or SQL queries. (Ex. 1006, p. 13.) Therefore, the person having ordinary skill in the art in 1999-2000 would not have understood “search engine” as used in Exhibit 1007 to be limited to “web servers” “today,” but would have instead understood that term to include the various heterogeneous data repositories and external archives searched by Knowledge Broker in the implementations described in Exhibits 1006 and 1007.
Introduces irrelevant information (e.g. specific data types, crawling …)

31. Dr. Carbonell’s conclusions regarding whether or not Knowledge Broker performs metasearching and/or performs the elements of the challenged claims are based on features not required by the definition of metasearch or by the challenged claims. (Ex. 2006, pp. 10-13, 20.) For instance, the claims don’t require retrieving any particular type of data (e.g., the claims don’t require the data to be current and up-to-date, structured, unstructured, or semistructured), so the types of data stored in the repositories are irrelevant. Also, the claims don’t require a particular type of host, so it’s irrelevant if the data repositories are search engines or not, or crawl or not, or are called search engines or web servers. What is relevant with regard to the data repositories and the definition of metasearching is whether there are plural unique hosts, and whether plural unique hosts accept either an unstructured keyword query or any query, and then return results. Knowledge Broker definitely sends, in response to a user request, search queries to plural unique hosts (aka plural data repositories) that accept either an unstructured keyword query or any query, and returns results to the user. Knowledge Broker therefore performs metasearching and is a Metasearch Engine.
Mamma.com discloses claim elements (g) and (h)

32. The opinions that follow in this section are in response to Patent Owner’s contention that Mamma.com does not disclose elements (g) and (h) (CBM2014-00001 – Patent Owner’s Response to Petition, pp. 54-56). The challenged claims 2, 6, and 8 each contain identical claim elements (g) and (h), as follows:

(g) receiving another Hypertext Transfer Protocol request from the client device for placing an order for the at least one item;

(h) processing the order.

33. Mamma.com provided advertising in the form of Key Word advertising, Hot Buttons, and Download Now Icons. (Ex. 1005, p. 8.) In addition, Mamma.com tracked every ad view and ad click. (Ex. 1005, p. 4.)
34. Download Now Icons gave software companies the ability to electronically distribute their applications. (Ex. 1005, p. 8.) This allowed users to order the software companies’ software directly from Mamma.com.

35. When a user clicks on a Download Now Icon, the user is ordering the software company’s application. First, however, the user click sends a Hypertext Transfer Protocol request to Mamma.com, so Mamma.com can track the click and record the order of the application, disclosing claim limitation (g), and then subsequently process the order request for the application by redirecting the user to the software company’s ftp server or download page, disclosing claim limitation (h).
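For illustration only, the following hypothetical Python sketch (my own example; the paths, port, query parameter, and vendor URL are invented and are not taken from Ex. 1005) shows the kind of click-tracking and redirect flow described above: the HTTP request from the client places the order, the click is recorded, and the order is processed by redirecting the client to the software company’s download location.

    # Hypothetical sketch only; paths, port, parameter names, and vendor URL are
    # invented and not taken from Ex. 1005.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    ORDERS = []  # in-memory log of tracked clicks / placed orders

    class DownloadNowHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            params = parse_qs(urlparse(self.path).query)
            app = params.get("app", ["unknown"])[0]
            # Element (g): the HTTP request from the client places the order; record it.
            ORDERS.append({"app": app, "client": self.client_address[0]})
            # Element (h): process the order by redirecting the client to the
            # software company's download location (invented URL).
            self.send_response(302)
            self.send_header("Location", "http://vendor.example.com/downloads/" + app)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), DownloadNowHandler).serve_forever()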
36. Ordering free software for download as disclosed in Mamma.com was a well-known business practice in 1999-2000. Product availability, pricing, and purchasing are not required by the claims; rather, the claims only require “placing an order” and “processing an order.” Mamma.com disclosed “placing an order” by tracking the advertising click requesting the downloadable software and “processing an order” by redirecting the client for download of the ordered software. This is analogous today to when users order free apps to download onto their phones. There is no question that clicking an icon on a smart phone today to install a free app such as “Plants vs. Zombies” is an example of “placing an order” and “processing an order” even though no product availability, pricing, or purchasing is required.
Knowledge Broker discloses claim elements (g) and (h)

37. The opinions in this section are in response to Patent Owner’s contention that Knowledge Broker does not disclose elements (g) and (h). Knowledge Broker discloses “receiving another Hypertext Transfer Protocol request from the client device for placing an order for the at least one item” and “processing the order,” where the at least one item is a document for printing-on-demand on local commercial copy centers (Ex. 1006, p. 4) or a document to be shipped by mail (Ex. 1007).

38. Further, Knowledge Broker describes a basic architecture that includes “production management and controlling,” which includes both receiving a request for an order and processing an order. (Ex. 1006, p. 4.)
39. The Knowledge Broker basic architecture also includes “clients and communication infrastructure,” which includes a communication infrastructure such as Hypertext Transfer Protocol, which is the communications protocol underlying the World Wide Web. (Ex. 1006, p. 5.)

40. The above infrastructure described in Exhibit 1006 discloses claim elements (g) and (h).

41. Exhibit 1007 also discloses claim elements (g) and (h) on pages 10-13.

…
42. This passage, and in particular the “Web-form for the order,” clearly discloses “receiving another Hypertext Transfer Protocol request from the client device for placing an order for the at least one item.” Also, passing the printing request to Oldenburg and processing the document to finally send a printed, bound document by surface mail to the customer clearly discloses “processing the order.” In addition, Knowledge Broker also discloses processing orders for selected documents to be printed and delivered, including integrating accounting an
