{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "name": "Lab3.ipynb", "provenance": [], "collapsed_sections": [] }, "kernelspec": { "display_name": "Python [bayes]", "language": "python", "name": "Python [bayes]" } }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "7i3Q_9X59dIQ", "colab_type": "text" }, "source": [ "Probabilistic Programming\n", "=====\n", "and Bayesian Methods for Hackers \n", "========\n", "\n", "##### Version 0.1\n", "\n", "`Original content created by Cam Davidson-Pilon`\n", "\n", "`Ported to Python 3 and PyMC3 by Max Margenot (@clean_utensils) and Thomas Wiecki (@twiecki) at Quantopian (@quantopian)`\n", "___\n", "\n", "\n", "Welcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the project's [homepage](https://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!" ] }, { "cell_type": "markdown", "metadata": { "id": "TEy20iK49dIR", "colab_type": "text" }, "source": [ "Chapter 1\n", "======\n", "***" ] }, { "cell_type": "markdown", "metadata": { "id": "DQ0DhRB79dIS", "colab_type": "text" }, "source": [ "The Philosophy of Bayesian Inference\n", "------\n", " \n", "> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n", "\n", "If you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. " ] }, { "cell_type": "markdown", "metadata": { "id": "-YJlwWtl9dIU", "colab_type": "text"