Can anyone give me a few short English passages about science and technology?

New technology may help scientists decode dolphin communication. Florida-based dolphin researcher Jack Kassewitz and British acoustic engineer John Stuart Reid are analyzing dolphin sounds with an instrument called a CymaScope. The scope has a thin membrane that vibrates as sound passes through it. Then it converts those vibrations into a 3-D, high-definition image. Kassewitz hopes the unprecedented detail will solve some mysteries.
Pamela Martin is a geophysicist at the University of Chicago. She co-authored a 2009 study analyzing the environmental impact of an American diet based on meat versus a diet based on vegetables.
Pamela Martin: Right now the current mean American diet has 28% meat, but there's nothing to say that if we all cut back to 10% we would suffer nutritionally. And yet the environment would benefit quite a bit from that.
Martin's study examined how much land a meat-based diet versus a vegetable-based diet would require to support Americans.
Pamela Martin: You need about four and a half times the amount of land to grow the feed that you feed to cattle, versus using that land directly to grow food that we would directly consume.
Ultimately, Martin said, raising livestock requires more fertilizer and emits more greenhouse gases. These have environmental impacts.
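A rough way to see the scale of that difference is simple arithmetic. The short Python sketch below is purely illustrative: it uses only the roughly 4.5x feed-versus-food land factor Martin cites, and treating a diet as a simple meat/plant calorie split is an assumption made here for the example, not part of the study.

# Back-of-the-envelope comparison using the ~4.5x land factor quoted above.
# Assumption (for illustration only): land use scales linearly with the share
# of calories coming from meat versus plants.
MEAT_LAND_FACTOR = 4.5  # land to grow feed for cattle vs. growing food eaten directly

def relative_land_use(meat_share):
    """Land required, relative to an all-plant diet supplying the same calories."""
    return meat_share * MEAT_LAND_FACTOR + (1.0 - meat_share)

current = relative_land_use(0.28)  # the 28% meat diet mentioned above
reduced = relative_land_use(0.10)  # the 10% meat scenario
print(f"28% meat diet needs {current:.2f}x the land of an all-plant diet")
print(f"10% meat diet needs {reduced:.2f}x; about {(1 - reduced / current) * 100:.0f}% less land")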

http://www.scienceupdate.com/
http://www.earthsky.org/


Science and technology newsletter: technology newspaper
Is this what you were looking for? ↑↑ ......
1. Physics leaves the complications of life and living objects to biology, and is only too happy to yield to chemistry the exploration of the myriad ways atoms interact with one another.
2. Surely objects cut into such shapes must have an especially significant place in a subject professing to deal with simple things.
3. It looks the same from all directions, and it can be handled, thrown, swung or rolled to investigate all the laws of mechanics.
4. That being so, we idealize the surface away by pure imagination – infinitely sharp, perfectly smooth, absolutely featureless.
5. All we can hope to do is classify things into groups and study behavior which we believe to be common to all members of the groups, and this means abstracting the general from the particular.
6. Although one may point to the enormous importance of the arrangement, rather than the chemical nature, of the atoms in a crystal with regard to its properties, and quote with glee the case of carbon atoms, which form the hardest substance known when ordered in a diamond lattice and one of the softest, as its use in pencils testifies, when ordered in a graphite lattice, it is obviously essential that individual atomic characteristics be ultimately built into whatever model is developed.
Words
1. polygon: a plane figure with many sides; polyhedron: a solid with many faces
2. tetragon: four sides; tetrahedron: four faces
3. pentagon: five sides; pentahedron: five faces
4. hexagon: six sides; hexahedron: six faces
5. heptagon: seven sides; heptahedron: seven faces
6. octagon: eight sides; octahedron: eight faces
7. enneagon: nine sides; enneahedron: nine faces
8. decagon: ten sides; decahedron: ten faces
9. dodecagon: twelve sides; dodecahedron: twelve faces
10. icosagon: twenty sides; icosahedron: twenty faces
One sometimes hears the Internet characterized as the world's library for the digital age. This description does not stand up under even casual examination. The Internet, and particularly its collection of multimedia resources known as the World Wide Web, was not designed to support the organized publication and retrieval of information, as libraries are. It has evolved into what might be thought of as a chaotic repository for the collective output of the world's digital "printing presses." This storehouse of information contains not only books and papers but raw scientific data, menus, meeting minutes, advertisements, video and audio recordings, and transcripts of interactive conversations. The ephemeral mixes everywhere with works of lasting importance.
In short, the Net is not a digital library. But if it is to continue to grow and thrive as a new means of communication, something very much like traditional library services will be needed to organize, access and preserve networked information. Even then, the Net will not resemble a traditional library, because its contents are more widely dispersed than a standard collection. Consequently, the librarian's classification and selection skills must be complemented by the computer scientist's ability to automate the task of indexing and storing information. Only a synthesis of the differing perspectives brought by both professions will allow this new medium to remain viable.
At the moment, computer technology bears most of the responsibility for organizing information on the Internet. In theory, software that classifies and indexes collections of digital data can address the glut of information on the Net, and the inability of human indexers and bibliographers to cope with it. Automating information access has the advantage of directly exploiting the rapidly dropping costs of computers and avoiding the expense and delays of human indexing.
But, as anyone who has ever sought information on the Web knows, these automated tools categorize information differently than people do. In one sense, the job performed by the various indexing and cataloguing tools known as search engines is highly democratic. Machine-based approaches provide uniform and equal access to all the information on the Net. In practice, this electronic egalitarianism can prove a mixed blessing. Web "surfers" who type in a search request are often overwhelmed by thousands of responses. The search results frequently contain references to irrelevant Web sites while leaving out others that hold important material.
Crawling the Web
The nature of electronic indexing can be understood by examining the way Web search engines, such as Lycos or Digital Equipment Corporation's AltaVista, construct indexes and find information requested by a user. Periodically, they dispatch programs (sometimes referred to as Web crawlers, spiders or indexing robots) to every site they can identify on the Web, each site being a set of documents, called pages, that can be accessed over the network. The Web crawlers download and then examine these pages and extract indexing information that can be used to describe them. This process, details of which vary among search engines, may include simply locating most of the words that appear in Web pages or performing sophisticated analyses to identify key words and phrases. These data are then stored in the search engine's database, along with an address, termed a uniform resource locator (URL), that represents where the file resides. A user then deploys a browser, such as the familiar Netscape, to submit queries to the search engine's database. The query produces a list of Web resources, the URLs that can be clicked to connect to the sites identified by the search.
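As a concrete illustration of that crawl-and-index cycle, here is a minimal Python sketch. It is not the code of any real search engine: the page fetching, markup stripping and word extraction are deliberately crude, and the seed URL is a placeholder.

# Minimal crawl-and-index sketch: fetch a page, extract its words, and record
# them in an inverted index that maps each term to the URLs containing it.
import re
import urllib.request
from collections import defaultdict

inverted_index = defaultdict(set)  # term -> set of URLs whose pages contain it

def crawl_and_index(url):
    """Download one page and add its words to the index."""
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    text = re.sub(r"<[^>]+>", " ", html)  # crude markup stripping
    for word in re.findall(r"[a-z]+", text.lower()):
        inverted_index[word].add(url)

def query(term):
    """Return the URLs of pages that contained the query term."""
    return inverted_index.get(term.lower(), set())

# crawl_and_index("http://example.com/")  # placeholder seed page
# print(query("internet"))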
Existing search engines service millions of queries a day. Yet it has become clear that they are less than ideal for retrieving an ever-growing body of information on the Web. In contrast to human indexers, automated programs have difficulty identifying characteristics of a document such as its overall theme or its genre, whether it is a poem or a play, or even an advertisement.
The Web, moreover, still lacks standards that would facilitate automated indexing. As a result, documents on the Web are not structured so that programs can reliably extract the routine information that a human indexer might find through a cursory inspection: author, date of publication, length of text and subject matter. (This information is known as metadata.) A Web crawler might turn up the desired article authored by Jane Doe. But it might also find thousands of other articles in which such a common name is mentioned in the text or in a bibliographic reference.
Publishers sometimes abuse the indiscriminate character of automated indexing. A Web site can bias the selection process to attract attention to itself by repeating within a document a word, such as "sex," that is known to be queried often. The reason: a search engine will display first the URLs for the documents that mention a search term most frequently. In contrast, humans can easily see around such simpleminded tricks.
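The trick works because of how naive frequency ranking behaves. The toy Python illustration below is not any engine's actual ranking code, and the page texts and URLs are invented; it simply shows that a page which repeats the query term floats to the top.

# Toy frequency-based ranking: order pages by how often the query term appears.
from collections import Counter

def rank_by_term_frequency(pages, term):
    """pages maps URL -> page text; returns (URL, count) pairs, most frequent first."""
    counts = {url: Counter(text.lower().split())[term] for url, text in pages.items()}
    return sorted(counts.items(), key=lambda item: item[1], reverse=True)

pages = {
    "http://serious-essay.example": "a thoughtful essay that mentions sex once",
    "http://spam-page.example": "sex " * 50 + "buy now",
}
print(rank_by_term_frequency(pages, "sex"))  # the repetitive page ranks first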
The professional indexer can describe the components of individual pages of all sorts (from text to video) and can clarify how those parts fit together into a database of information. Civil War photographs, for example, might form part of a collection that also includes period music and soldier diaries. A human indexer can describe a site's rules for the collection and retention of programs in, say, an archive that stores Macintosh software. Analyses of a site's purpose, history and policies are beyond the capabilities of a crawler program.
Another drawback of automated indexing is that most search engines recognize text only. The intense interest in the Web, though, has come about because of the medium's ability to display images, whether graphics or video clips. Some research has moved forward toward finding color or patterns within images. But no program can deduce the underlying meaning and cultural significance of an image (for example, that a group of men dining represents the Last Supper).
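As a sketch of what "finding color within images" can mean in practice, the snippet below compares two images by their color histograms, one simple similarity measure among many; it is not the specific research referred to above. It assumes the Pillow and NumPy libraries are installed, and the file names are placeholders.

# Compare two images by the overlap of their RGB color histograms.
import numpy as np
from PIL import Image

def color_histogram(path, bins=8):
    """Normalized RGB histogram of the image at `path`."""
    pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def histogram_similarity(path_a, path_b):
    """Histogram intersection: 1.0 means identical color distributions."""
    a, b = color_histogram(path_a), color_histogram(path_b)
    return float(np.minimum(a, b).sum())

# print(histogram_similarity("photo1.jpg", "photo2.jpg"))  # placeholder file names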
At the same time, the way information is structured on the Web is changing so that it often cannot be examined by Web crawlers. Many Web pages are no longer static files that can be analyzed and indexed by such programs. In many cases, the information displayed in a document is computed by the Web site during a search in response to the user's request. The site might assemble a map and a text document from different areas of its database, a disparate collection of information that conforms to the user's query. A newspaper Web site, for instance, might allow a reader to specify that only stories on the oil-equipment business be displayed in a personalized version of the paper. The database of stories from which this document is put together could not be searched by a Web crawler that visits the site.
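To make the point about request-time assembly concrete, here is a minimal Python sketch of a page that exists only as the result of a query. The story database and topic names are invented, and no real newspaper site works exactly this way.

# A page assembled on demand from a story database; a crawler fetching static
# files never sees the database behind it.
STORY_DATABASE = [
    {"topic": "oil-equipment", "headline": "Drill-rig makers report a strong quarter"},
    {"topic": "sports", "headline": "Local team wins the championship"},
]

def personalized_front_page(topic):
    """Build an HTML page, at request time, from stories matching the reader's topic."""
    stories = [s["headline"] for s in STORY_DATABASE if s["topic"] == topic]
    items = "".join(f"<li>{headline}</li>" for headline in stories)
    return f"<html><body><h1>Your {topic} news</h1><ul>{items}</ul></body></html>"

print(personalized_front_page("oil-equipment"))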
A growing body of research has attempted to address some of the problems involved with automated classification methods. One approach seeks to attach metadata to files so that indexing systems can collect this information. The most advanced effort is the Dublin Core Metadata program and an affiliated endeavor, the Warwick Framework, the first named after a workshop in Dublin, Ohio, the other after a colloquy in Warwick, England. The workshops have defined a set of metadata elements that are simpler than those in traditional library cataloguing and have also created methods for incorporating them within pages on the Web.
Categorization of metadata might range from title or author to type of document (text or video, for instance). Either automated indexing software or humans may derive the metadata, which can then be attached to a Web page for retrieval by a crawler. Precise and detailed human annotations can provide a more in-depth characterization of a page than can an automated indexing program alone.
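One common way to attach such metadata is to embed Dublin Core-style meta elements in a page's header and have the indexing program collect them. The Python sketch below is a minimal illustration only: the sample page and its field values are invented, and real Dublin Core usage defines more elements and conventions than are shown here.

# Collect Dublin Core-style metadata (DC.title, DC.creator, ...) from <meta> tags.
from html.parser import HTMLParser

SAMPLE_PAGE = """
<html><head>
  <meta name="DC.title" content="Civil War Photograph Collection">
  <meta name="DC.creator" content="Jane Doe">
  <meta name="DC.type" content="image">
</head><body>...</body></html>
"""

class DublinCoreCollector(HTMLParser):
    """Gather name/content pairs from <meta> tags whose name starts with 'DC.'."""
    def __init__(self):
        super().__init__()
        self.metadata = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr = dict(attrs)
            name = attr.get("name", "")
            if name.startswith("DC."):
                self.metadata[name] = attr.get("content", "")

collector = DublinCoreCollector()
collector.feed(SAMPLE_PAGE)
print(collector.metadata)  # {'DC.title': ..., 'DC.creator': ..., 'DC.type': ...}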
Where costs can be justified, human indexers have begun the laborious task of compiling bibliographies of some Web sites. The Yahoo database, a commercial venture, classifies sites by broad subject area. And a research project at the University of Michigan is one of……



In cases where information is furnished without charge or is advertiser supported, low-cost computer-based indexing will most likely dominate, maintaining the same unstructured environment that characterizes much of the contemporary Internet.
