
-update #1722

Merged · 7 commits · Mar 2, 2022
1 change: 0 additions & 1 deletion .mailmap

This file was deleted.

14 changes: 5 additions & 9 deletions gatsby-node.js
```diff
@@ -1,9 +1,5 @@
-const path = require('path');
-
-exports.onCreateWebpackConfig = ({ actions }) => {
-  actions.setWebpackConfig({
-    resolve: {
-      modules: [path.resolve(__dirname, 'src'), 'node_modules'],
-    },
-  });
-};
+/**
+ * Implement Gatsby's Node APIs in this file.
+ *
+ * See: https://www.gatsbyjs.org/docs/node-apis/
+ */
```
14 changes: 7 additions & 7 deletions notes/BGOONZ_BLOG_2.0.wiki/anatomy-of-search-engine.md.md
Original file line number Diff line number Diff line change
@@ -25,15 +25,15 @@ Computer Science Department, Stanford University, Stanford, CA 94305

### Abstract

> In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at [http://google.stanford.edu/](http://google.stanford.edu/)
> To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date.
> Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
>
> **Keywords**: World Wide Web, Search Engines, Information Retrieval, PageRank, Google

## 1\. Introduction

_(Note: There are two versions of this paper -- a longer full version and a shorter printed version. The full version is available on the web and the conference CD-ROM.)_
The web creates new challenges for information retrieval. The amount of information on the web is growing rapidly, as well as the number of new users inexperienced in the art of web research. People are likely to surf the web using its link graph, often starting with high quality human maintained indices such as [Yahoo!](http://www.yahoo.com/) or with search engines. Human maintained lists cover popular topics effectively but are subjective, expensive to build and maintain, slow to improve, and cannot cover all esoteric topics. Automated search engines that rely on keyword matching usually return too many low quality matches. To make matters worse, some advertisers attempt to gain people's attention by taking measures meant to mislead automated search engines. We have built a large-scale search engine which addresses many of the problems of existing systems. It makes especially heavy use of the additional structure present in hypertext to provide much higher quality search results. We chose our system name, Google, because it is a common spelling of googol, or 10^100, and fits well with our goal of building very large-scale search engines.

### 1.1 Web Search Engines -- Scaling Up: 1994 - 2000
@@ -114,7 +114,7 @@ Another big difference between the web and traditional well controlled collectio

First, we will provide a high level discussion of the architecture. Then, there are some in-depth descriptions of important data structures. Finally, the major applications: crawling, indexing, and searching will be examined in depth.

![](http://infolab.stanford.edu/~backrub/over.gif)

Figure 1. High Level Google Architecture

@@ -250,12 +250,12 @@

| Item | Size |
| --- | --- |
| Lexicon | 293 MB |
| Temporary Anchor Data (not in total) | 6.6 GB |
| Document Index Incl. Variable Width Data | 9.7 GB |
@@ -412,7 +412,7 @@ Scott Hassan and Alan Steremberg have been critical to the development of Google

## Vitae

![](http://infolab.stanford.edu/~backrub/sergey.jpg)![](http://infolab.stanford.edu/~backrub/larry.jpg)
**Sergey Brin** received his B.S. degree in mathematics and computer science from the University of Maryland at College Park in 1993. Currently, he is a Ph.D. candidate in computer science at Stanford University where he received his M.S. in 1995. He is a recipient of a National Science Foundation Graduate Fellowship. His research interests include search engines, information extraction from unstructured sources, and data mining of large text collections and scientific data.

**Lawrence Page** was born in East Lansing, Michigan, and received a B.S.E. in Computer Engineering at the University of Michigan Ann Arbor in 1995. He is currently a Ph.D. candidate in Computer Science at Stanford University. Some of his research interests include the link structure of the web, human computer interaction, search engines, scalability of information access interfaces, and personal data mining.
78 changes: 40 additions & 38 deletions package.json
@@ -1,40 +1,42 @@
Resulting package.json after this change (the additions are `@stackbit/stackbit-medium-importer` and `axios`; the scraped diff view had interleaved the old and new copies of the file):

```json
{
  "name": "stackbit-libris-theme",
  "description": "Stackbit Libris Theme",
  "version": "0.0.1",
  "license": "MIT",
  "scripts": {
    "develop": "gatsby develop",
    "start": "npm run develop",
    "build": "gatsby build --prefix-paths",
    "serve": "gatsby serve"
  },
  "dependencies": {
    "@stackbit/gatsby-plugin-menus": "0.0.4",
    "@stackbit/stackbit-medium-importer": "^0.2.0",
    "babel-runtime": "6.26.0",
    "chokidar": "3.4.0",
    "classnames": "2.2.6",
    "fs-extra": "7.0.1",
    "axios": "^0.16.2",
    "gatsby": "2.25.4",
    "gatsby-plugin-disqus": "^1.2.3",
    "gatsby-plugin-react-helmet": "3.3.3",
    "gatsby-plugin-sass": "2.8.0",
    "gatsby-plugin-typescript": "2.4.4",
    "gatsby-source-filesystem": "2.3.10",
    "gatsby-transformer-remark": "2.8.14",
    "js-yaml": "3.12.2",
    "lodash": "4.17.11",
    "marked": "0.6.1",
    "moment": "2.23.0",
    "moment-strftime": "0.5.0",
    "node-sass": "4.14.0",
    "node-sass-utils": "1.1.2",
    "react": "16.5.1",
    "react-dom": "16.13.1",
    "react-helmet": "5.2.1",
    "react-html-parser": "2.0.2",
    "react-script-tag": "1.1.2",
    "rehype-react": "3.0.2",
    "sprintf-js": "1.1.2"
  }
}
```