COMPUTING RESEARCH NEWS
September 1999
Special Insert

Computing Research Association
Best Practices Memo

Evaluating Computer Scientists and Engineers
For Promotion and Tenure

The evaluation of computer science and engineering faculty for promotion and tenure has generally followed the dictate “publish or perish,” where “publish” has had its standard academic meaning of “publish in archival journals” [Academic Careers, 94]. Relying on journal publications as the sole demonstration of scholarly achievement, especially counting such publications to determine whether they exceed a prescribed threshold, ignores significant evidence of accomplishment in computer science and engineering. For example, conference publication is preferred in the field, and computational artifacts — software, chips, etc. — are a tangible means of conveying ideas and insight. Obligating faculty to be evaluated by this traditional standard handicaps their careers, and indirectly harms the field. This document describes appropriate evidence of academic achievement in computer science and engineering.

Computer Science and Engineering — Structure of the Field

Computation is synthetic in the sense that many of the phenomena computer scientists and engineers study are created by humans rather than occurring naturally in the physical world. As Professor Fred Brooks of the University of North Carolina, Chapel Hill observed [Academic Careers, 94, p. 35]:

    When one discovers a fact about nature, it is a contribution per se, no matter how small. Since anyone can create something new [in a synthetic field], that alone does not establish a contribution. Rather, one must show that the creation is better.

Accordingly, research in computer science and engineering is largely devoted to establishing the “better” property.
The computer science and engineering field in academe is composed of faculty who apply one of two basic research paradigms: theory or experimentation. Generalizing, theoreticians tend to conduct research that resembles mathematics. The phenomena are abstract, and the intellectual contribution is usually expressed in the form of theorems with proofs. Though conference publication is highly regarded in the theoretical community, there is a long tradition of completing, revising, and extending conference papers for submission and publication in archival journals. Accordingly, faculty who pursue theoretical work are often more easily evaluated by traditional academic mechanisms. Nevertheless, the discussion below regarding “impact” will apply to theoretical work, too.

As a second generalization, experimentalists tend to conduct research that involves creating computational artifacts and assessing them. The ideas are embodied in the artifact, which could be a chip, circuit, computer, network, software, robot, etc. Artifacts can be compared to lab apparatus in other physical sciences or engineering in that they are a medium of experimentation. Unlike lab apparatus, however, computational artifacts embody the idea or concept as well as being a means to measure or observe it. Researchers test and measure the performance of the artifacts, evaluating their effectiveness at solving the target problem. A key research tradition is to share artifacts with other researchers to the greatest extent possible. Allowing one’s colleagues to examine and use one’s creation is a more intimate way of conveying one’s ideas than journal publishing, and is seen to be more effective. For experimentalists, conference publication is preferred to journal publication, and the premier conferences are generally more selective than the premier journals [Academic Careers, 94]. In these and other ways experimental research is at variance with conventional academic publication traditions.

The reasons conference publication is preferred to journal publication, at least for experimentalists, are the shorter time to print (7 months vs. 1-2 years), the opportunity to describe the work before one’s peers at a public presentation, and the more complete level of review (4-5 evaluations per paper compared to 2-3 for an archival journal) [Academic Careers, 94]. Publication in the prestige conferences is inferior to the prestige journals only in having significant page limitations and little time to polish the paper. In those dimensions that count most, conferences are superior.

Impact — The Criterion for Success

Brooks noted that researchers in a synthetic field must establish that their creation is better. “Better” can mean many things, including “solves a problem in less time,” “solves a larger class of problems,” “is more efficient of resources,” “is more expressive by some criterion,” “is more visually appealing in the case of graphics,” “presents a totally new capability,” etc. A key point about this type of research is that the “better” property is not simply an observation. Rather, the research will postulate that a new idea — a mechanism, process, algorithm, representation, protocol, data structure, methodology, language, optimization or simplification, model, etc. — will lead to a “better” result. For researchers in the field, making the connection between the idea and the improvement is as important as quantifying how much the improvement is. The contribution is the idea, and is generally a component of a larger computational system.
The fundamental basis for academic achievement is the impact of one’s ideas and scholarship on the field. What group is affected and the form of the impact can vary considerably. Often the beneficiaries of research are other researchers. The contribution may be used directly or be the foundation for some other artifact; it may change how others conduct their research; it may affect the questions they ask or the topics they choose to study; and so on. It may even indicate the impossibility of certain goals and kill off lines of research. Clearly, it is not so much the number of researchers that are affected as how fundamentally the contribution influences their work. Users are another group that might feel the impact of research.
For the purposes of evaluating a faculty member for promotion or tenure, there are two critical objectives of an evaluation:

(a) Establish a connection between a faculty member’s intellectual contribution and the benefits claimed for it, and
(b) Determine the magnitude and significance of the impact.

Both aspects can be documented, but it is more complicated than simply counting archival publications.

Assessing Impact

Standard publication seeks to validate the two objectives indirectly, arguing that the editor and reviewers of the publication must be satisfied that the claims of novelty and ownership are true, and that the significance is high enough to meet the journal’s standards. There is obvious justification for this view, and so standard publication is an acceptable, albeit indirect, means of assessing impact. But it can be challenged on two counts. First, the same rationale can be applied to conference proceedings, provided they are as carefully reviewed as the prestige conferences are in the computer science and engineering field. Second, the measure of the impact is embodied in the quality of the publication, i.e., if the publication’s standards are high then the significance is presumed to be high. Not all papers in high-quality publications are of great significance, and high-quality papers can appear in lower-quality venues. Publication’s indirect approach to assessing impact implies that it is useful, but not definitive.
The primary direct means of assessing impact — to document items (a) and (b) above — is by letters of evaluation from peers. Peers understand the contribution as well as its significance. Though some institutions demand that peer letter writers be selected to maximize the peer’s stature in the field, e.g., membership in the National Academy, a more rational basis should be used.
From the point of view of documenting item (a), the connection between the faculty member’s contribution and its effects, evaluators may be selected from the faculty member’s collaborators, competitors, industrial colleagues, users, etc., so that they will have the sharpest knowledge about the contribution and its impact. If an artifact is involved, it is expected that the letter writers are familiar with it, as well as with the candidate’s publication record. These writers may be biased, of course, but this is a cost of collecting primary data. The promotion and tenure committee will have to take bias into consideration, perhaps seeking additional advice.
The letter writers need to be familiar with the artifact as well as the publications. The artifact is a self-describing embodiment of the ideas. Though publications are necessary for the obvious reasons — highlighting the contribution, relating the ideas to previous work, presenting measurements and experimental results, etc. — the artifact encapsulates information that cannot be captured on paper. Most artifacts “run,” allowing evaluators to acquire dynamic information. Further, most artifacts are so complex that it is impossible to explain all of their characteristics; it is better to observe them. Artifacts, being essential to the research enterprise, are essential to its evaluation, too.
Some schools prohibit letters of evaluation from writers not having an academic affiliation. This can be a serious handicap to experimental computer scientists and engineers because some of the field’s best researchers work at industrial research labs and, occasionally, advanced development centers. Academic-industry collaborations occur regularly, based on common interests and the advantage that a company’s resources can bring to the implementation of a complex artifact. Letters from these researchers are no less informed, thoughtful, or insightful because the writer’s return address is a company.
In terms of assessing item (b), the significance of the impact, the letter writers will generally address significance, but quantitative data will often be offered as well. Examples include the number of downloads of a (software) artifact, the number of users, the number of hits on a Web page, etc. Such measures can be sound indicators of significance and influence, especially if they indicate that peers use the research, but popularity is not equivalent to impact.
Specifically, it is possible to write a valuable, widely used piece of software, inducing a large number of downloads, and not make any academically significant contribution. Developers at IBM, Microsoft, Sun, etc. do this every day. In such cases the software is literally new, as might be expected in a synthetic field, but it has been created within the known state of the art. It is not “better” by embodying new ideas or techniques, as Brooks requires. It may be improved, but anyone “schooled in the art” would achieve similar results.
Quantitative data may not imply all that is claimed for it, and it can be manipulated. Downloads do not imply that the software is actually being used, nor do Web hits imply interest. There are techniques, such as the Google page-rank approach [http://www.google.com], that may produce objective information about Web usage, for example, but caution in using numbers is always advised.
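
To illustrate why link-based measures can be more informative than raw hit counts, the sketch below computes a simple page-rank-style weighting in which a page’s score derives from the scores of the pages linking to it. It is a minimal illustration only; the function name, damping factor, and iteration count are assumptions made for this sketch, not details taken from this memo or from any search engine’s implementation.

```python
# Minimal sketch of link-based ranking in the spirit of the page-rank idea
# referenced above. The damping factor (0.85) and iteration count are
# illustrative assumptions, not values specified by the memo.

def page_rank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    # Collect every page mentioned, whether as a source or a target.
    pages = set(links)
    for targets in links.values():
        pages.update(targets)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        # Every page keeps a small baseline score ...
        new_rank = {page: (1.0 - damping) / n for page in pages}
        # ... and receives a share of the score of each page that links to it.
        for page, targets in links.items():
            if not targets:
                continue
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

# Pages "a" and "b" both link to "c", so "c" ends up with the highest score,
# even though nothing here counts downloads or raw hits.
print(page_rank({"a": ["c"], "b": ["c"], "c": ["a"]}))
```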

Summary

Computer science and engineering is a synthetic field in which creating something new is only part of the problem; the creation must also be shown to be “better.” Though standard publication is one indicator of academic achievement, other forms of publication, specifically conference publication, and the dissemination of artifacts also transmit ideas. Conference publication is both rigorous and prestigious. Assessing artifacts requires evaluation from knowledgeable peers. Quantitative measures of impact are possible, but they may not tell the implied story.

References

Academic Careers for Experimental Computer Scientists and Engineers, National Academy Press, 1994.

Google Page Rank System, http://www.google.com

Approved by the
Computing Research Association
Board of Directors
August 1999

Prepared by:
David Patterson (University of California, Berkeley)
Lawrence Snyder (University of Washington)
Jeffrey Ullman (Stanford University)
