diff --git a/.mailmap b/.mailmap
deleted file mode 100644
index 0767f3a77a..0000000000
--- a/.mailmap
+++ /dev/null
@@ -1 +0,0 @@
-Bryan Guner
\ No newline at end of file
diff --git a/gatsby-node.js b/gatsby-node.js
index 9e57598d59..a1bfac02e7 100644
--- a/gatsby-node.js
+++ b/gatsby-node.js
@@ -1,9 +1,5 @@
-const path = require('path');
-
-exports.onCreateWebpackConfig = ({ actions }) => {
-    actions.setWebpackConfig({
-        resolve: {
-            modules: [path.resolve(__dirname, 'src'), 'node_modules'],
-        },
-    });
-};
+/**
+ * Implement Gatsby's Node APIs in this file.
+ *
+ * See: https://www.gatsbyjs.org/docs/node-apis/
+ */
diff --git a/notes/BGOONZ_BLOG_2.0.wiki/anatomy-of-search-engine.md.md b/notes/BGOONZ_BLOG_2.0.wiki/anatomy-of-search-engine.md.md
index 86538eaa63..215becb572 100644
--- a/notes/BGOONZ_BLOG_2.0.wiki/anatomy-of-search-engine.md.md
+++ b/notes/BGOONZ_BLOG_2.0.wiki/anatomy-of-search-engine.md.md
@@ -25,15 +25,15 @@ Computer Science Department, Stanford University, Stanford, CA 94305

 ### Abstract

-> In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at [http://google.stanford.edu/](http://google.stanford.edu/)
-> To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date.
+> In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at [http://google.stanford.edu/](http://google.stanford.edu/)
+> To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date.
 > Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
 >
 > **Keywords**: World Wide Web, Search Engines, Information Retrieval, PageRank, Google

 ## 1\. Introduction

-_(Note: There are two versions of this paper -- a longer full version and a shorter printed version. The full version is available on the web and the conference CD-ROM.)_
+_(Note: There are two versions of this paper -- a longer full version and a shorter printed version. The full version is available on the web and the conference CD-ROM.)_

 The web creates new challenges for information retrieval. The amount of information on the web is growing rapidly, as well as the number of new users inexperienced in the art of web research. People are likely to surf the web using its link graph, often starting with high quality human maintained indices such as [Yahoo!](http://www.yahoo.com/) or with search engines. Human maintained lists cover popular topics effectively but are subjective, expensive to build and maintain, slow to improve, and cannot cover all esoteric topics. Automated search engines that rely on keyword matching usually return too many low quality matches. To make matters worse, some advertisers attempt to gain people's attention by taking measures meant to mislead automated search engines. We have built a large-scale search engine which addresses many of the problems of existing systems. It makes especially heavy use of the additional structure present in hypertext to provide much higher quality search results. We chose our system name, Google, because it is a common spelling of googol, or 10100 and fits well with our goal of building very large-scale search engines.

 ### 1.1 Web Search Engines -- Scaling Up: 1994 - 2000
@@ -114,7 +114,7 @@ Another big difference between the web and traditional well controlled collectio

 First, we will provide a high level discussion of the architecture. Then, there is some in-depth descriptions of important data structures. Finally, the major applications: crawling, indexing, and searching will be examined in depth.

-![](http://infolab.stanford.edu/~backrub/over.gif)
+![]()

 Figure 1. High Level Google Architecture

@@ -250,12 +250,12 @@ Lexicon

 293 MB

-Temporary Anchor Data
+Temporary Anchor Data (not in total)

 6.6 GB

-Document Index Incl.
+Document Index Incl. Variable Width Data

 9.7 GB

@@ -412,7 +412,7 @@ Scott Hassan and Alan Steremberg have been critical to the development of Google

 ## Vitae

-![](http://infolab.stanford.edu/~backrub/sergey.jpg)![](http://infolab.stanford.edu/~backrub/larry.jpg)
+![](http://infolab.stanford.edu/~backrub/sergey.jpg)![](http://infolab.stanford.edu/~backrub/larry.jpg)

 **Sergey Brin** received his B.S. degree in mathematics and computer science from the University of Maryland at College Park in 1993. Currently, he is a Ph.D. candidate in computer science at Stanford University where he received his M.S. in 1995. He is a recipient of a National Science Foundation Graduate Fellowship. His research interests include search engines, information extraction from unstructured sources, and data mining of large text collections and scientific data.

 **Lawrence Page** was born in East Lansing, Michigan, and received a B.S.E. in Computer Engineering at the University of Michigan Ann Arbor in 1995. He is currently a Ph.D. candidate in Computer Science at Stanford University. Some of his research interests include the link structure of the web, human computer interaction, search engines, scalability of information access interfaces, and personal data mining.
diff --git a/notes/BGOONZ_BLOG_2.0.wiki/articles/anatomy-of-search-engine.md.md b/notes/BGOONZ_BLOG_2.0.wiki/articles/anatomy-of-search-engine.md.md
index 86538eaa63..215becb572 100644
--- a/notes/BGOONZ_BLOG_2.0.wiki/articles/anatomy-of-search-engine.md.md
+++ b/notes/BGOONZ_BLOG_2.0.wiki/articles/anatomy-of-search-engine.md.md
@@ -25,15 +25,15 @@ Computer Science Department, Stanford University, Stanford, CA 94305

 ### Abstract

-> In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at [http://google.stanford.edu/](http://google.stanford.edu/)
-> To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date.
+> In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at [http://google.stanford.edu/](http://google.stanford.edu/)
+> To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date.
 > Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
 >
 > **Keywords**: World Wide Web, Search Engines, Information Retrieval, PageRank, Google

 ## 1\. Introduction

-_(Note: There are two versions of this paper -- a longer full version and a shorter printed version. The full version is available on the web and the conference CD-ROM.)_
+_(Note: There are two versions of this paper -- a longer full version and a shorter printed version. The full version is available on the web and the conference CD-ROM.)_

 The web creates new challenges for information retrieval. The amount of information on the web is growing rapidly, as well as the number of new users inexperienced in the art of web research. People are likely to surf the web using its link graph, often starting with high quality human maintained indices such as [Yahoo!](http://www.yahoo.com/) or with search engines. Human maintained lists cover popular topics effectively but are subjective, expensive to build and maintain, slow to improve, and cannot cover all esoteric topics. Automated search engines that rely on keyword matching usually return too many low quality matches. To make matters worse, some advertisers attempt to gain people's attention by taking measures meant to mislead automated search engines. We have built a large-scale search engine which addresses many of the problems of existing systems. It makes especially heavy use of the additional structure present in hypertext to provide much higher quality search results. We chose our system name, Google, because it is a common spelling of googol, or 10100 and fits well with our goal of building very large-scale search engines.

 ### 1.1 Web Search Engines -- Scaling Up: 1994 - 2000
@@ -114,7 +114,7 @@ Another big difference between the web and traditional well controlled collectio

 First, we will provide a high level discussion of the architecture. Then, there is some in-depth descriptions of important data structures. Finally, the major applications: crawling, indexing, and searching will be examined in depth.

-![](http://infolab.stanford.edu/~backrub/over.gif)
+![]()

 Figure 1. High Level Google Architecture

@@ -250,12 +250,12 @@ Lexicon

 293 MB

-Temporary Anchor Data
+Temporary Anchor Data (not in total)

 6.6 GB

-Document Index Incl.
+Document Index Incl. Variable Width Data

 9.7 GB

@@ -412,7 +412,7 @@ Scott Hassan and Alan Steremberg have been critical to the development of Google

 ## Vitae

-![](http://infolab.stanford.edu/~backrub/sergey.jpg)![](http://infolab.stanford.edu/~backrub/larry.jpg)
+![](http://infolab.stanford.edu/~backrub/sergey.jpg)![](http://infolab.stanford.edu/~backrub/larry.jpg)

 **Sergey Brin** received his B.S. degree in mathematics and computer science from the University of Maryland at College Park in 1993. Currently, he is a Ph.D. candidate in computer science at Stanford University where he received his M.S. in 1995. He is a recipient of a National Science Foundation Graduate Fellowship. His research interests include search engines, information extraction from unstructured sources, and data mining of large text collections and scientific data.

 **Lawrence Page** was born in East Lansing, Michigan, and received a B.S.E. in Computer Engineering at the University of Michigan Ann Arbor in 1995. He is currently a Ph.D. candidate in Computer Science at Stanford University. Some of his research interests include the link structure of the web, human computer interaction, search engines, scalability of information access interfaces, and personal data mining.
diff --git a/package.json b/package.json
index 2784ed084f..965881b405 100644
--- a/package.json
+++ b/package.json
@@ -1,40 +1,42 @@
 {
-    "name": "stackbit-libris-theme",
-    "description": "Stackbit Libris Theme",
-    "version": "0.0.1",
-    "license": "MIT",
-    "scripts": {
-        "develop": "gatsby develop",
-        "start": "npm run develop",
-        "build": "gatsby build --prefix-paths",
-        "serve": "gatsby serve"
-    },
-    "dependencies": {
-        "@stackbit/gatsby-plugin-menus": "0.0.4",
-        "babel-runtime": "6.26.0",
-        "chokidar": "3.4.0",
-        "classnames": "2.2.6",
-        "fs-extra": "7.0.1",
-        "gatsby": "2.25.4",
-        "gatsby-plugin-sass": "2.8.0",
-        "gatsby-plugin-react-helmet": "3.3.3",
-        "gatsby-plugin-typescript": "2.4.4",
-        "gatsby-source-filesystem": "2.3.10",
-        "gatsby-transformer-remark": "2.8.14",
-        "gatsby-plugin-disqus": "^1.2.3",
-        "js-yaml": "3.12.2",
-        "lodash": "4.17.11",
-        "marked": "0.6.1",
-        "moment": "2.23.0",
-        "moment-strftime": "0.5.0",
-        "node-sass": "4.14.0",
-        "node-sass-utils": "1.1.2",
-        "react": "16.5.1",
-        "react-dom": "16.13.1",
-        "react-helmet": "5.2.1",
-        "react-html-parser": "2.0.2",
-        "react-script-tag": "1.1.2",
-        "rehype-react": "3.0.2",
-        "sprintf-js": "1.1.2"
-    }
+    "name": "stackbit-libris-theme",
+    "description": "Stackbit Libris Theme",
+    "version": "0.0.1",
+    "license": "MIT",
+    "scripts": {
+        "develop": "gatsby develop",
+        "start": "npm run develop",
+        "build": "gatsby build --prefix-paths",
+        "serve": "gatsby serve"
+    },
+    "dependencies": {
+        "@stackbit/gatsby-plugin-menus": "0.0.4",
+        "@stackbit/stackbit-medium-importer": "^0.2.0",
+        "babel-runtime": "6.26.0",
+        "chokidar": "3.4.0",
+        "classnames": "2.2.6",
+        "fs-extra": "7.0.1",
+        "axios": "^0.16.2",
+        "gatsby": "2.25.4",
+        "gatsby-plugin-disqus": "^1.2.3",
+        "gatsby-plugin-react-helmet": "3.3.3",
+        "gatsby-plugin-sass": "2.8.0",
+        "gatsby-plugin-typescript": "2.4.4",
+        "gatsby-source-filesystem": "2.3.10",
+        "gatsby-transformer-remark": "2.8.14",
+        "js-yaml": "3.12.2",
+        "lodash": "4.17.11",
+        "marked": "0.6.1",
+        "moment": "2.23.0",
+        "moment-strftime": "0.5.0",
+        "node-sass": "4.14.0",
+        "node-sass-utils": "1.1.2",
+        "react": "16.5.1",
+        "react-dom": "16.13.1",
+        "react-helmet": "5.2.1",
+        "react-html-parser": "2.0.2",
+        "react-script-tag": "1.1.2",
+        "rehype-react": "3.0.2",
+        "sprintf-js": "1.1.2"
+    }
 }
diff --git a/src/pages/docs/articles/how-search-engines-work.md b/src/pages/docs/articles/how-search-engines-work.md
index c0271f24de..5a10bc974b 100644
--- a/src/pages/docs/articles/how-search-engines-work.md
+++ b/src/pages/docs/articles/how-search-engines-work.md
@@ -48,15 +48,15 @@ Computer Science Department, Stanford University, Stanford, CA 94305

 ### Abstract

->        In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at [http://google.stanford.edu/](http://google.stanford.edu/)
->        To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date.
+>        In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at [http://google.stanford.edu/](http://google.stanford.edu/)
+>        To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date.
 >        Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
->
+>
 > **Keywords**: World Wide Web, Search Engines, Information Retrieval, PageRank, Google

 ## 1\. Introduction

-_(Note: There are two versions of this paper -- a longer full version and a shorter printed version. The full version is available on the web and the conference CD-ROM.)_
+_(Note: There are two versions of this paper -- a longer full version and a shorter printed version. The full version is available on the web and the conference CD-ROM.)_

 The web creates new challenges for information retrieval. The amount of information on the web is growing rapidly, as well as the number of new users inexperienced in the art of web research. People are likely to surf the web using its link graph, often starting with high quality human maintained indices such as [Yahoo!](http://www.yahoo.com/) or with search engines. Human maintained lists cover popular topics effectively but are subjective, expensive to build and maintain, slow to improve, and cannot cover all esoteric topics. Automated search engines that rely on keyword matching usually return too many low quality matches. To make matters worse, some advertisers attempt to gain people's attention by taking measures meant to mislead automated search engines. We have built a large-scale search engine which addresses many of the problems of existing systems. It makes especially heavy use of the additional structure present in hypertext to provide much higher quality search results. We chose our system name, Google, because it is a common spelling of googol, or 10100 and fits well with our goal of building very large-scale search engines.

 ### 1.1 Web Search Engines -- Scaling Up: 1994 - 2000
@@ -96,9 +96,9 @@ The citation (link) graph of the web is an important resource that has largely g

 Academic citation literature has been applied to the web, largely by counting citations or backlinks to a given page. This gives some approximation of a page's importance or quality. PageRank extends this idea by not counting links from all pages equally, and by normalizing by the number of links on a page. PageRank is defined as follows:

 > _We assume page A has pages T1...Tn which point to it (i.e., are citations). The parameter d is a damping factor which can be set between 0 and 1. We usually set d to 0.85. There are more details about d in the next section. Also C(A) is defined as the number of links going out of page A. The PageRank of a page A is given as follows:_
->
+>
 > _PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))_
->
+>
 > _Note that the PageRanks form a probability distribution over web pages, so the sum of all web pages' PageRanks will be one._

 PageRank or _PR(A)_ can be calculated using a simple iterative algorithm, and corresponds to the principal eigenvector of the normalized link matrix of the web. Also, a PageRank for 26 million web pages can be computed in a few hours on a medium size workstation. There are many other details which are beyond the scope of this paper.
@@ -137,11 +137,11 @@ Another big difference between the web and traditional well controlled collectio

 First, we will provide a high level discussion of the architecture. Then, there is some in-depth descriptions of important data structures. Finally, the major applications: crawling, indexing, and searching will be examined in depth.

-![](http://infolab.stanford.edu/~backrub/over.gif)
+![]()

 Figure 1. High Level Google Architecture

-   
+

 ### 4.1 Google Architecture Overview
@@ -163,7 +163,7 @@ BigFiles are virtual files spanning multiple file systems and are addressable by

 #### 4.2.2 Repository

-   
+

 ![](http://infolab.stanford.edu/~backrub/repos.gif)
@@ -191,7 +191,7 @@ Our compact encoding uses two bytes for every hit. There are two types of hits:

 Figure 3. Forward and Reverse Indexes and the Lexicon

-  
+

 The length of a hit list is stored before the hits themselves. To save space, the length of the hit list is combined with the wordID in the forward index and the docID in the inverted index. This limits it to 8 and 5 bits respectively (there are some tricks which allow 8 bits to be borrowed from the wordID). If the length is longer than would fit in that many bits, an escape code is used in those bits, and the next two bytes contain the actual length.
@@ -230,12 +230,12 @@ The goal of searching is to provide quality search results efficiently. Many of
 5. Compute the rank of that document for the query.
 6. If we are in the short barrels and at the end of any doclist, seek to the start of the doclist in the full barrel for every word and go to step 4.
 7. If we are not at the end of any doclist go to step 4.
-    
+
 Sort the documents that have matched by rank and return the top k.

 Figure 4. Google Query Evaluation

-  
+

 To put a limit on response time, once a certain number (currently 40,000) of matching documents are found, the searcher automatically goes to step 8 in Figure 4. This means that it is possible that sub-optimal results would be returned. We are currently investigating other ways to solve this problem. In the past, we sorted the hits according to PageRank, which seemed to improve the situation.
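
The PageRank recurrence quoted in the hunk above, PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)), is easy to try out with the simple iterative algorithm the paper describes. Below is a minimal JavaScript sketch; the four-page link graph, the iteration count, and the uniform starting ranks are assumptions made for illustration, not anything from the paper or this repository:

```js
// Iterative PageRank over a tiny hypothetical link graph.
// Each key is a page; the array lists the pages it links out to.
const links = {
    a: ['b', 'c'],
    b: ['c'],
    c: ['a'],
    d: ['c'],
};

function pageRank(graph, d = 0.85, iterations = 50) {
    const pages = Object.keys(graph);
    // Start every page with rank 1 and iterate toward a fixed point.
    const pr = Object.fromEntries(pages.map((p) => [p, 1]));

    for (let i = 0; i < iterations; i++) {
        const next = {};
        for (const page of pages) {
            // Pages T1...Tn that point to `page` (its citations).
            const backlinks = pages.filter((p) => graph[p].includes(page));
            // PR(A) = (1-d) + d * (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)),
            // where C(T) is the number of links going out of page T.
            next[page] =
                1 - d +
                d * backlinks.reduce((sum, t) => sum + pr[t] / graph[t].length, 0);
        }
        Object.assign(pr, next);
    }
    return pr;
}

console.log(pageRank(links)); // page c, with the most citations, ranks highest
```

With d = 0.85 as the text suggests, a few dozen iterations are plenty for a graph this small; the paper reports that ranks for 26 million pages take a few hours on a medium size workstation.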
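A similar sketch can make the query evaluation loop of Figure 4 concrete. This is only a schematic reading of the figure: the short/long barrel machinery is elided, and the `index` layout and the hit-count "rank" below are stand-ins invented for illustration, not Google's actual structures:

```js
// Schematic query evaluation in the spirit of Figure 4.
// `index` maps a word to its doclist: [{ docID, hits: [...] }, ...].
function evaluateQuery(index, words, k = 10, limit = 40000) {
    // Steps 1-3: convert words to doclists (wordID lookup and barrel
    // seeking are elided in this sketch).
    const doclists = words.map((w) => index.get(w) ?? []);
    if (doclists.some((list) => list.length === 0)) return [];

    // Step 4: scan until there is a document matching all search terms.
    const [first, ...rest] = doclists;
    const matches = [];
    for (const entry of first) {
        if (rest.every((list) => list.some((e) => e.docID === entry.docID))) {
            // Step 5: compute the rank of that document for the query.
            // (Hit count is a placeholder for the real hit-type/PageRank score.)
            matches.push({ docID: entry.docID, rank: entry.hits.length });
            // The 40,000-match cutoff described in the text: jump to step 8.
            if (matches.length >= limit) break;
        }
    }
    // Step 8: sort the matched documents by rank and return the top k.
    return matches.sort((a, b) => b.rank - a.rank).slice(0, k);
}

// Hypothetical usage:
const demoIndex = new Map([
    ['search', [{ docID: 1, hits: [4, 9] }, { docID: 2, hits: [1] }]],
    ['engine', [{ docID: 1, hits: [5] }]],
]);
console.log(evaluateQuery(demoIndex, ['search', 'engine'])); // [{ docID: 1, rank: 2 }]
```
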
@@ -257,8 +257,8 @@ All of the results are reasonably high quality pages and, at last check, none we

 ### 5.1 Storage Requirements

-Aside from search quality, Google is designed to scale cost effectively to the size of the Web as it grows. One aspect of this is to use storage efficiently. Table 1 has a breakdown of some statistics and storage requirements of Google. Due to compression the total size of the repository is about 53 GB, just over one third of the total data it stores. At current disk prices this makes the repository a relatively cheap source of useful data. More importantly, the total of all the data used by the search engine requires a comparable amount of storage, about 55 GB. Furthermore, most queries can be answered using just the short inverted index. With better encoding and compression of the Document Index, a high quality web search engine may fit onto a 7GB drive of a new PC.
-   
+Aside from search quality, Google is designed to scale cost effectively to the size of the Web as it grows. One aspect of this is to use storage efficiently. Table 1 has a breakdown of some statistics and storage requirements of Google. Due to compression the total size of the repository is about 53 GB, just over one third of the total data it stores. At current disk prices this makes the repository a relatively cheap source of useful data. More importantly, the total of all the data used by the search engine requires a comparable amount of storage, about 55 GB. Furthermore, most queries can be answered using just the short inverted index. With better encoding and compression of the Document Index, a high quality web search engine may fit onto a 7GB drive of a new PC.
+

 Storage Statistics
@@ -282,12 +282,12 @@ Lexicon

 293 MB

-Temporary Anchor Data 
+Temporary Anchor Data (not in total)

 6.6 GB

-Document Index Incl. 
+Document Index Incl. Variable Width Data

 9.7 GB
@@ -322,11 +322,11 @@ Number of 404's

 1.6 million

-  
+

 Table 1. Statistics

-   
+

 ###  5.2 System Performance
@@ -336,11 +336,11 @@ It is important for a search engine to crawl and index efficiently. This way inf

 Improving the performance of search was not the major focus of our research up to this point. The current version of Google answers most queries in between 1 and 10 seconds. This time is mostly dominated by disk IO over NFS (since disks are spread over a number of machines). Furthermore, Google does not have any optimizations such as query caching, subindices on common terms, and other common optimizations. We intend to speed up Google considerably through distribution and hardware, software, and algorithmic improvements. Our target is to be able to handle several hundred queries per second. Table 2 has some sample query times from the current version of Google. They are repeated to show the speedups resulting from cached IO.

-  
+

 **Initial Query**

-**Same Query Repeated (IO mostly cached)**
+**Same Query Repeated (IO mostly cached)**

 **Query**
@@ -392,11 +392,11 @@ search engines

 1.16

-  
+

 Table 2. Search Times

-   
+

 ## 6 Conclusions
@@ -416,4 +416,4 @@ Aside from the quality of search, Google is designed to scale. It must be effici

 ### 6.4 A Research Tool

-In addition to being a high quality search engine, Google is a research tool. The data Google has collected has already resulted in many other papers submitted to conferences and many more on the way. Recent research such as \[[Abiteboul 97](http://infolab.stanford.edu/~backrub/google.html#ref)\] has shown a number of limitations to queries about the Web that may be answered without having the Web available locally. This means that Google (or a similar system) is not only a valuable research tool but a necessary one for a wide range of applications. We hope Google will be a resource for searchers and researchers all around the world and will spark the next generation of search engine technology.
\ No newline at end of file
+In addition to being a high quality search engine, Google is a research tool. The data Google has collected has already resulted in many other papers submitted to conferences and many more on the way. Recent research such as \[[Abiteboul 97](http://infolab.stanford.edu/~backrub/google.html#ref)\] has shown a number of limitations to queries about the Web that may be answered without having the Web available locally. This means that Google (or a similar system) is not only a valuable research tool but a necessary one for a wide range of applications. We hope Google will be a resource for searchers and researchers all around the world and will spark the next generation of search engine technology.
diff --git a/src/pages/docs/docs/markdown.md b/src/pages/docs/docs/markdown.md
index d837d55f0f..00c61fc2a4 100644
--- a/src/pages/docs/docs/markdown.md
+++ b/src/pages/docs/docs/markdown.md
@@ -31,24 +31,24 @@ The basics of markdown can be found [here](https://guides.github.com/features/ma

 ### `left` alignment

-
+

 This is the code you need to align images to the left:

 ```
-
+
 ```

 ---

 ### `right` alignment

-
+

 This is the code you need to align images to the right:

 ```
-
+
 ```

 ---
diff --git a/src/sass/imports/_buttons.scss b/src/sass/imports/_buttons.scss
index 62ddb3b702..e8aaed16f5 100644
--- a/src/sass/imports/_buttons.scss
+++ b/src/sass/imports/_buttons.scss
@@ -3,50 +3,44 @@
     align-items: center;
     background: #000000;
     border: 0;
-    border-radius: 1.95em;
+    border-radius: 1.75em;
     box-shadow: none;
     box-sizing: border-box;
     color: #fff;
     cursor: pointer;
     display: -ms-inline-flexbox;
     display: inline-flex;
-    font-size: 1em;
+    font-size: 0.875em;
     font-weight: bold;
     -ms-flex-pack: center;
     justify-content: center;
     letter-spacing: 0.035em;
-    line-height: 1.6;
+    line-height: 1.2;
     opacity: 1;
-    padding: 1.2em 2.14285em;
+    padding: 0.9em 2.14285em;
     text-decoration: none;
-    -webkit-transition: 0.3s ease;
-    transition: 0.3s ease;
+    -webkit-transition: .3s ease;
+    transition: .3s ease;
     vertical-align: middle;
-    -webkit-box-shadow: 0 1px 4px rgba(0, 0, 0, 0.3),
-        0 0 40px rgba(0, 0, 0, 0.1) inset;
-    -moz-box-shadow: 0 1px 4px rgba(0, 0, 0, 0.3),
-        0 0 40px rgba(0, 0, 0, 0.1) inset;
-    box-shadow: 0 1px 4px rgba(0, 0, 0, 0.3),
-        0 0 40px rgba(0, 0, 0, 0.1) inset;

     &:hover,
     &:focus,
     &:active {
         color: #fff;
-        opacity: 0.8;
+        opacity: .8;
         outline: 0;
     }
 }

 .button-secondary {
     background: 0 !important;
-    box-shadow: inset 0 0 0 3px currentColor;
+    box-shadow: inset 0 0 0 2px currentColor;
     color: $color-primary;

     &:hover,
     &:focus,
     &:active {
-        box-shadow: inset 0 0 0 4px currentColor;
+        box-shadow: inset 0 0 0 3px currentColor;
         color: $color-primary;
         opacity: 1;
     }
@@ -55,8 +49,8 @@
 .button-icon {
     background: 0 !important;
     border: 0;
-    color: black;
-    font-size: 1.2em;
+    color: inherit;
+    font-size: 1em;
     font-weight: normal;
     letter-spacing: normal;
     padding: 0.25em;
@@ -92,4 +86,4 @@
     &:active {
         outline: 0;
     }
-}
+}
\ No newline at end of file
diff --git a/src/sass/imports/_forms.scss b/src/sass/imports/_forms.scss
index d03cf35dae..4bc74865f1 100644
--- a/src/sass/imports/_forms.scss
+++ b/src/sass/imports/_forms.scss
@@ -4,8 +4,8 @@ label {
     line-height: 1.5;
     margin-bottom: 0.25em;

-    input[type='checkbox']+&,
-    input[type='radio']+& {
+    input[type=checkbox] + &,
+    input[type=radio] + & {
         font-weight: normal;
         cursor: pointer;
         padding-left: 0.25em;
@@ -13,13 +13,13 @@ label {
     }
 }

-input[type='text'],
-input[type='password'],
-input[type='email'],
-input[type='tel'],
-input[type='number'],
-input[type='search'],
-input[type='url'],
+input[type="text"],
+input[type="password"],
+input[type="email"],
+input[type="tel"],
+input[type="number"],
+input[type="search"],
+input[type="url"],
 select,
 textarea {
     background: #fff;
diff --git a/src/sass/imports/_functions.scss b/src/sass/imports/_functions.scss
index 6b80e55055..1f996f61aa 100644
--- a/src/sass/imports/_functions.scss
+++ b/src/sass/imports/_functions.scss
@@ -5,14 +5,3 @@
     }
     @return $map;
 }
-
-
-//----------------experimental------------------
-
-
-
-
-
-
-
-
diff --git a/src/sass/imports/_helpers.scss b/src/sass/imports/_helpers.scss
index 7a0e03546f..2351f06b54 100644
--- a/src/sass/imports/_helpers.scss
+++ b/src/sass/imports/_helpers.scss
@@ -16,7 +16,7 @@
     &:after {
         background: $color-primary;
         display: block;
-        content: '';
+        content: "";
         height: 100%;
         left: -1px;
         position: absolute;
@@ -61,7 +61,7 @@
 .has-gradient {
     background: $color-primary;
     background: -webkit-gradient(linear, left top, right top, from($color-secondary), to($color-primary));
-    background: linear-gradient(to right, $color-secondary, $color-primary);
+    background: linear-gradient(to right,$color-secondary, $color-primary);
     color: #fff;
     position: relative;

@@ -79,10 +79,10 @@
         color: inherit !important;

         &:hover {
-            opacity: 0.8;
-        }
-    }
+            opacity: .8; }
+  }
+  }

     .button {
         &:not(.button-secondary) {
@@ -92,10 +92,10 @@
         &:hover,
         &:focus,
         &:active {
-            opacity: 0.85;
-        }
-    }
+            opacity: .85; }
+  }
+  }

     .button-secondary {
         color: #fff !important;
@@ -108,8 +108,8 @@
 // Background image
 .bg-img {
-    -webkit-animation: fadeIn20 0.75s ease-in-out;
-    animation: fadeIn20 0.75s ease-in-out;
+    -webkit-animation: fadeIn20 .75s ease-in-out;
+    animation: fadeIn20 .75s ease-in-out;
     background-position: center;
     background-size: cover;
     bottom: 0;
@@ -123,9 +123,7 @@
 // Grid
 .grid {
     display: -ms-flexbox;
-    // box-shadow: inset 0px 11px 8px -10px black, inset 0px -11px 8px -10px black;
     margin: 0 auto !important;
     display: flex;
-    border-radius: 5px;
     -ms-flex-wrap: wrap;
     flex-wrap: wrap;
     margin-left: -$grid-gap / 2;
@@ -134,7 +132,6 @@
 .grid-item {
     box-sizing: border-box;
-    // box-shadow: inset 0px 11px 8px -10px black, inset 0px -11px 8px -10px black;
     margin: 0 auto !important;
     padding-left: $grid-gap / 2;
     padding-right: $grid-gap / 2;
     position: relative;
diff --git a/src/sass/imports/_icons.scss b/src/sass/imports/_icons.scss
index 4e36651076..07a5dc1084 100644
--- a/src/sass/imports/_icons.scss
+++ b/src/sass/imports/_icons.scss
@@ -1,7 +1,7 @@
 // SVG icons
 .icon {
     color: inherit;
-    fill: white;
+    fill: currentColor;
     flex-shrink: 0;
     height: 1em;
     line-height: 1;
@@ -11,7 +11,7 @@
 // CSS icons
 .icon-menu,
 .icon-close {
-    background: white;
+    background: currentColor;
     border-radius: 1px;
     color: inherit;
     height: 2px;
@@ -24,9 +24,9 @@

     &:before,
     &:after {
-        background: white;
+        background: currentColor;
         border-radius: 1px;
-        content: '';
+        content: "";
         height: 100%;
         left: 0;
         position: absolute;
@@ -64,17 +64,17 @@

 .icon-angle-right {
     background: 0;
-    border-color: rgb(0, 0, 0);
-    border-style: dashed;
-    border-width: 2px 1px 0 0;
+    border-color: currentColor;
+    border-style: solid;
+    border-width: 1px 1px 0 0;
     box-sizing: border-box;
-    height: 6px;
-    left: 70%;
-    margin-left: -2px;
-    margin-top: -2px;
+    height: 8px;
+    left: 50%;
+    margin-left: -4px;
+    margin-top: -4px;
     position: absolute;
     top: 50%;
-    width: 6px;
+    width: 8px;
     -webkit-transform: rotate(45deg);
     transform: rotate(45deg);
 }
diff --git a/src/sass/imports/_palettes.scss b/src/sass/imports/_palettes.scss
index ac2cc9874b..217e3b06ed 100644
--- a/src/sass/imports/_palettes.scss
+++ b/src/sass/imports/_palettes.scss
@@ -1,57 +1,57 @@
 @each $palette in map-keys($theme-palettes) {
-    $palette-suffix: '#{$palette}';
-    $color-primary: map-deep-get($theme-palettes, $palette, 'primary');
-    $color-secondary: map-deep-get($theme-palettes, $palette, 'secondary');
+  $palette-suffix: "#{$palette}";
+  $color-primary: map-deep-get($theme-palettes, $palette, "primary");
+  $color-secondary: map-deep-get($theme-palettes, $palette, "secondary");

     .palette-#{$palette-suffix} {
         a:not(.button) {
             color: $color-primary;

-            &:hover {
-                color: $gray-600;
-            }
-        }
+      &:hover {
+        color: $gray-600;
+      }
+    }

-        blockquote {
-            border-color: $color-primary;
-        }
+    blockquote {
+      border-color: $color-primary;
+    }

-        .line-left:after,
-        .button {
-            background: $color-primary;
-        }
+    .line-left:after,
+    .button {
+      background: $color-primary;
+    }

-        .has-gradient {
-            background: $color-primary;
-            background: -webkit-gradient(linear, left top, right top, from($color-secondary), to($color-primary));
-            background: linear-gradient(to right, $color-secondary, $color-primary);
-        }
+    .has-gradient {
+      background: $color-primary;
+      background: -webkit-gradient(linear, left top, right top, from($color-secondary), to($color-primary));
+      background: linear-gradient(to right,$color-secondary, $color-primary);
+    }

-        .button-secondary,
-        .button-icon:hover,
-        .button-icon:focus,
-        .button-icon:active,
-        .has-gradient .button:not(.button-secondary),
-        .menu-item.current,
-        #masthead a:not(.button):hover,
-        #colophon a:not(.button):hover,
-        .post.type-docs .hash-link:hover,
-        .post.type-docs .hash-link:focus,
-        #docs-menu a:hover,
-        #docs-menu .current,
-        #docs-menu .current-parent,
-        #page-nav li.active > a,
-        #page-nav a:hover {
-            color: $color-primary;
-        }
+    .button-secondary,
+    .button-icon:hover,
+    .button-icon:focus,
+    .button-icon:active,
+    .has-gradient .button:not(.button-secondary),
+    .menu-item.current,
+    #masthead a:not(.button):hover,
+    #colophon a:not(.button):hover,
+    .post.type-docs .hash-link:hover,
+    .post.type-docs .hash-link:focus,
+    #docs-menu a:hover,
+    #docs-menu .current,
+    #docs-menu .current-parent,
+    #page-nav li.active > a,
+    #page-nav a:hover {
+      color: $color-primary;
+    }

-        #docs-section-items {
-            .docs-item-link {
-                &:hover {
-                    border-color: $color-primary;
-                    color: $color-primary;
-                }
-            }
+    #docs-section-items {
+      .docs-item-link {
+        &:hover {
+          border-color: $color-primary;
+          color: $color-primary;
         }
+      }
     }
+  }
 }
diff --git a/src/sass/imports/_prism.scss b/src/sass/imports/_prism.scss
index e7e01c3c8d..046e2c39e1 100644
--- a/src/sass/imports/_prism.scss
+++ b/src/sass/imports/_prism.scss
@@ -6,8 +6,8 @@ https://prismjs.com/download.html#themes=prism&languages=markup+css+clike+javasc
  * @author Lea Verou
  */

-code[class*='language-js'],
-pre[class*='language-js'] {
+code[class*="language-"],
+pre[class*="language-"] {
     -moz-tab-size: 4;
     -o-tab-size: 4;
     tab-size: 4;
@@ -17,7 +17,7 @@ pre[class*='language-js'] {
     hyphens: none;
 }

-:not(pre)>code[class*='language-js'] {
+:not(pre) > code[class*="language-"] {
     background: $gray-700;
     color: $gray-100;
 }
@@ -34,7 +34,7 @@ pre[class*='language-js'] {
 }

 .token.namespace {
-    opacity: 0.7;
+    opacity: .7;
 }

 .token.property,
@@ -100,22 +100,22 @@ div.code-toolbar {
     position: relative;
 }

-div.code-toolbar>.toolbar {
+div.code-toolbar > .toolbar {
     position: absolute;
     top: 0;
     right: 0;
 }

-div.code-toolbar>.toolbar .toolbar-item {
+div.code-toolbar > .toolbar .toolbar-item {
     display: block;
 }

-div.code-toolbar>.toolbar a {
+div.code-toolbar > .toolbar a {
     border: 0;
     cursor: pointer;
 }

-div.code-toolbar>.toolbar button {
+div.code-toolbar > .toolbar button {
     background: none;
     border: 0;
     border-radius: 0;
@@ -125,20 +125,19 @@ div.code-toolbar>.toolbar button {
     line-height: normal;
     overflow: visible;
     padding: 0;
-    user-select: none;
     -webkit-user-select: none;
     -moz-user-select: none;
     -ms-user-select: none;
 }

-div.code-toolbar>.toolbar a,
-div.code-toolbar>.toolbar button,
-div.code-toolbar>.toolbar span {
+div.code-toolbar > .toolbar a,
+div.code-toolbar > .toolbar button,
+div.code-toolbar > .toolbar span {
     background: $gray-600;
     color: $gray-300 !important;
     display: block;
-    font-size: 0.75em;
+    font-size: .75em;
     line-height: 1.5;
-    padding: 0.25em 0.5em;
+    padding: .25em .5em;
     text-decoration: none;
 }
diff --git a/stackbit.yaml b/stackbit.yaml
index 97d1aa3e17..2dc7537f26 100644
--- a/stackbit.yaml
+++ b/stackbit.yaml
@@ -1,7 +1,6 @@
 stackbitVersion: ~0.3.0
 ssgName: gatsby
 ssgVersion: 2.3.30
-nodeVersion: 14
 buildCommand: npm run build
 publishDir: public
 staticDir: static
@@ -572,4 +571,3 @@ models:
         default: name
       - type: boolean
         name: relativeUrl
-