
Commit

representation section first draft
kuleshov committed Mar 14, 2016
1 parent a8f6ede commit c9a5795
Showing 19 changed files with 295 additions and 22 deletions.
2 changes: 1 addition & 1 deletion _includes/header.html
@@ -11,7 +11,7 @@
<!-- {% endif %} -->
<!-- {% endunless %} -->
<!-- {% endfor %} -->
- <a href="{{ site.baseurl }}">Contents</a>
+ <a href="{{ site.baseurl }}/">Contents</a>
<a href="http://cs.stanford.edu/~ermon/cs228/index.html">Class</a>
<a href="#">Github</a>
</nav>
1 change: 1 addition & 0 deletions _layouts/post.html
@@ -19,6 +19,7 @@ <h1>{{ page.title | capitalize }}</h1>
tp: "\\tilde p",
pt: "p_\\theta",
Exp: "{\\mathbb{E}}",
+ Ind: "{\\mathbb{I}}",
KL: "{\\mathbb{KL}}",
Dc: "{\\mathcal{D}}",
note: ["\\textcolor{blue}{[NOTE: #1]}",1]
Binary file added assets/img/3node-bayesnets.png
Binary file added assets/img/cutset.png
Binary file added assets/img/dsep1.png
Binary file added assets/img/dsep2.png
Binary file added assets/img/full-bayesnet.png
Binary file added assets/img/grade-model.png
Binary file added assets/img/markovblanket.png
Binary file added assets/img/moralization.png
Binary file added assets/img/mrf-bn-comparison.png
Binary file added assets/img/mrf.png
Binary file added assets/img/mrf2.png
Binary file added assets/img/ocr.png
11 changes: 3 additions & 8 deletions css/tufte.scss
@@ -115,7 +115,7 @@ h1 { font-weight: 400;
margin-bottom: 1.568rem;
font-size: 2.5rem;
line-height: 0.784; }
//

// h2 { font-style: italic;
// font-weight: 400;
// margin-top: 1.866666666666667rem;
@@ -138,8 +138,8 @@ h1 { font-weight: 400;

h2 { font-style: italic;
font-weight: 400;
- margin-top: 2.1rem;
- margin-bottom: 0;
+ margin-top: 4rem;
+ margin-bottom: 1rem;
font-size: 2.2rem;
line-height: 1; }

@@ -277,11 +277,6 @@ section { padding-top: 1rem;

p, ol, ul { font-size: 1.4rem; }

- // ol { width: 50%;
- //   -webkit-padding-start: 5%;
- //   -webkit-padding-end: 5%;
- //   }

ul { width: 45%;
-webkit-padding-start: 5%;
-webkit-padding-end: 5%;
20 changes: 10 additions & 10 deletions index.md
@@ -2,7 +2,7 @@
layout: post
title: Contents
---
- {% newthought 'These notes'%} are meant to form a concise introductory course on probabilistic graphical modeling{% sidenote 1 'Probabilistic graphical modeling is a subfield of AI that studies how to model the world with probability distributions and then use these distributions to reason about ones decisions.'%}.
+ {% newthought 'These notes'%} form a concise introductory course on probabilistic graphical modeling{% sidenote 1 'Probabilistic graphical modeling is a subfield of AI that studies how to model the world with probability distributions.'%}.
They accompany and are based on the material of [CS228](cs.stanford.edu/~ermon/cs228/index.html), the graphical models course at Stanford University, taught by [Stefano Ermon](cs.stanford.edu/~ermon/).

The notes are written and maintained by [Volodymyr Kuleshov](www.stanford.edu/~kuleshov); contact me with any feedback and feel free to contribute your improvements on [Github](https://github.com/kuleshov/cs228-notes).
@@ -12,28 +12,28 @@ This site is currently under construction, but come back soon as we get more mat

1. [Introduction](preliminaries/introduction/): What is probabilistic graphical modeling? Overview of the course.

- 2. [Review of probability theory](#): Probability distributions. Conditional probability. Random variables.
+ 2. Review of probability theory: Probability distributions. Conditional probability. Random variables.

- 2. [Examples of real-world applications](#): Image denoising. RNA structure prediciton. Lexical analyses of sentences. Optical character recogition.
+ 2. Examples of real-world applications: Image denoising. RNA structure prediction. Lexical analyses of sentences. Optical character recognition.

## Representation

- 1. [Bayesian networks](#): Definitions. Representations via directed graphs. Independencies in directed models.
+ 1. [Bayesian networks](representation/directed/): Definitions. Representations via directed graphs. Independencies in directed models.

- 2. [Markov random fields](#): Undirected vs directed models. Independencies in undirected models. Conditional random fields.
+ 2. [Markov random fields](representation/undirected/): Undirected vs directed models. Independencies in undirected models. Conditional random fields.

## Inference

1. [Variable elimination](#): The inference problem. Variable elimination. Complexity of inference.

- 2. [Belief propagation](#): The junction tree algorithm. Exact inference in arbitrary graphs.
+ 2. Belief propagation: The junction tree algorithm. Exact inference in arbitrary graphs.

- 3. [Sampling-based inference](#): Monte-Carlo sampling. Importance sampling. Markov Chain Monte-Carlo. Applications in inference.
+ 3. Sampling-based inference: Monte-Carlo sampling. Importance sampling. Markov Chain Monte-Carlo. Applications in inference.

## Learning

- 1. [Learning in directed models](#): Undirected graphs. Independencies in undirected models. Conditional random fields.
+ 1. Learning in directed models: Undirected graphs. Independencies in undirected models. Conditional random fields.

- 2. [Learning in undirected models](#): Undirected graphs. Independencies in undirected models. Conditional random fields.
+ 2. Learning in undirected models: Undirected graphs. Independencies in undirected models. Conditional random fields.

- 3. [Structure learning](#): Undirected graphs. Independencies in undirected models. Conditional random fields.
+ 3. Structure learning: Undirected graphs. Independencies in undirected models. Conditional random fields.
6 changes: 3 additions & 3 deletions preliminaries/introduction/index.md
@@ -9,7 +9,7 @@ Building probabilistic models turns out to be a complex and fascinating problem.
Probabilistic modeling is also deeply grounded in reality and has countless real-world applications in fields as diverse as medicine, language processing, vision, physics, and many others.
It is very likely that at least half a dozen applications currently running on your computer are using graphical models internally.

- This combination of beautiful theory and powerful applications makes graphical models one of the most fascinating topics in modern artificial intelligence and computer science{% sidenote 1 'Indeed, the 2011 Turing award (concidered to be the "Nobel prize of computer science" was recently awarded to [Judea Pearl](http://amturing.acm.org/award_winners/pearl_2658896.cfm) for settling the foundations of probabilistic graphical modeling.'%}.
+ This combination of beautiful theory and powerful applications makes graphical models one of the most fascinating topics in modern artificial intelligence and computer science{% sidenote 1 'Indeed, the 2011 Turing award (considered to be the "Nobel prize of computer science") was recently awarded to [Judea Pearl](http://amturing.acm.org/award_winners/pearl_2658896.cfm) for laying the foundations of probabilistic graphical modeling.'%}.

## Probabilistic modeling

@@ -64,7 +64,7 @@ Each factor {%m%}p(x_i | y){%em%} can be completely described by a small number

## Describing probabilities with graphs

- Our independence assumption can be conveniently represented in the form of a graph. {% marginfigure 'nb1' 'assets/img/naive-bayes.png' 'Graphical representation of the Naive Bayes spam classification model. We can interpret the directed graph as indicating a story of how the data was generated: first, we a spam/non-spam label was chosen at random; then a subset of $$n$$ possible English words sampled independently and at random.' %}.
+ Our independence assumption can be conveniently represented in the form of a graph.{% marginfigure 'nb1' 'assets/img/naive-bayes.png' 'Graphical representation of the Naive Bayes spam classification model. We can interpret the directed graph as indicating a story of how the data was generated: first, a spam/non-spam label was chosen at random; then a subset of $$n$$ possible English words was sampled independently and at random.' %}
This representation has the immediate advantage of being easy to understand. It can be interpreted as telling us a story: an email was generated by first choosing at random whether the email is spam or not (indicated by $$y$$), and then by sampling words one at a time. Conversely, if we have a story of how our dataset was generated, we can naturally express it as a graph; then, by a series of simple rules, we can translate a graph into a probability distribution that admits a compact parametrization.
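The generative story above can be made concrete with a short sketch. The three-word vocabulary and all probabilities below are made-up placeholders, not values from the course; the point is only to show the Naive Bayes factorization {%m%}p(y) \prod_i p(x_i \mid y){%em%} driving both sampling and a Bayes-rule query.

```python
import random

# Hypothetical toy instantiation of the Naive Bayes spam model.
# All numbers here are illustrative placeholders.
P_SPAM = 0.3                                  # p(y = 1)
P_WORD = {                                    # p(x_i = 1 | y) for each word i
    "money":   {0: 0.05, 1: 0.60},
    "meeting": {0: 0.40, 1: 0.05},
    "viagra":  {0: 0.01, 1: 0.30},
}

def sample_email(rng=random):
    """Follow the generative story: draw y, then draw each word independently given y."""
    y = 1 if rng.random() < P_SPAM else 0
    x = {w: int(rng.random() < P_WORD[w][y]) for w in P_WORD}
    return y, x

def posterior_spam(x):
    """p(y = 1 | x) by Bayes' rule, using the factorization p(y) * prod_i p(x_i | y)."""
    def joint(y):
        p = P_SPAM if y == 1 else 1 - P_SPAM
        for w, xi in x.items():
            pw = P_WORD[w][y]
            p *= pw if xi else 1 - pw
        return p
    return joint(1) / (joint(0) + joint(1))

print(round(posterior_spam({"money": 1, "meeting": 0, "viagra": 1}), 3))  # ≈ 0.996
```

Note that the answer to the query "what is the probability of spam given these words?" needs only the {%m%}2n + 1{%em%} stored parameters, which is exactly the saving the independence assumption buys us.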

More importantly, we will eventually be interested in asking the model questions (e.g. what is the probability of spam given that I see these words?); answering these questions will require specialized algorithms that will be most naturally defined on the graph describing that probability and will be closely related to various graph algorithms. Also, we will be able to describe independence properties of a probabilistic model in terms of graph-theoretic concepts (e.g. in terms of node connectivity).
@@ -73,7 +73,7 @@ This brief discussion is meant to emphasize one take-away point: there is an in

## A bird's eye overview of the course

- Our discussion of graphical models will be divided into three major parts: representation (how to specify a model), inference (how to ask the model questions), and learning (how to fit a model to real-world data). These three themes will also be closely linked: to derive efficient inference and learning algorithms, the model will need to be adequately represented; furthermore, learning models will require inference as a subroutine. Thus, it will best to always keep the three tasks in mind, rather than focusing on them in isolation.
+ Our discussion of graphical models will be divided into three major parts: representation (how to specify a model), inference (how to ask the model questions), and learning (how to fit a model to real-world data). These three themes will also be closely linked: to derive efficient inference and learning algorithms, the model will need to be adequately represented; furthermore, learning models will require inference as a subroutine. Thus, it will be best to always keep the three tasks in mind, rather than focusing on them in isolation{% sidenote 1 'For a more detailed overview, see this [writeup](https://docs.google.com/file/d/0B_hicYJxvbiOc1ViZTRxbnhSU1cza1VhOFlhRlRuQQ/edit) by Neal Parikh; this part of the notes is based on it.'%}.

### Representation
