
I'm using IPython Notebook for my research. As my file grows bigger, I constantly extract code out of it: plotting methods, fitting methods, etc.

I think I need a way to organize this. Is there a good way to do it?


Currently, I do this by:

data/
helpers/
my_notebook.ipynb
import_file.py

I store data in data/, extract helper methods into helpers/, and divide them into files like plot_helper.py, app_helper.py, etc.

I summarize the imports in import_file.py:

from IPython.display import display

import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
from matplotlib import pyplot as plt
import sklearn
import re

And then I can import everything I need in the top cell of the .ipynb:

(screenshot of the notebook's top import cell)

The structure can be seen at https://github.com/cqcn1991/Wind-Speed-Analysis

One problem I have right now is that I have too many submodules in helpers/, and it's hard to decide which method should go into which file.

I think a possible way is to organize them into pre-processing, processing, and post-processing.

UPDATE:

My big jupyter research notebook: https://cdn.rawgit.com/cqcn1991/Wind-Speed-Analysis/master/output_HTML/marham.html

The top cell is standard imports + magics + extensions:

%matplotlib inline
%load_ext autoreload
%autoreload 2

from __future__ import division
from import_file import *
load_libs()
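
load_libs() is not shown above; presumably it is defined in import_file.py alongside the imports. A purely hypothetical sketch of what such a helper might do (the display and plotting defaults below are assumptions, not the project's actual code):

# Hypothetical load_libs() in import_file.py -- not the actual project code.
# The star import above already brings np, pd, mpl, plt, etc. into the notebook namespace;
# a helper like this would only apply shared display/plotting defaults.
def load_libs():
    pd.set_option('display.max_columns', 50)    # assumed pandas display preference
    mpl.rcParams['figure.figsize'] = (10, 6)    # assumed default figure size
    np.set_printoptions(precision=4)            # assumed numpy print precision
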
ZK Zhao
  • My big jupyter research notebook: https://cdn.rawgit.com/cqcn1991/Wind-Speed-Analysis/master/output_HTML/marham.html does not work. – Tejas Shetty Jun 12 '21 at 17:24
  • https://raw.githubusercontent.com/cqcn1991/Wind-Speed-Analysis/master/output_HTML/marham.html also does not work – Tejas Shetty Jun 12 '21 at 17:27
  • https://github.com/cqcn1991/Wind-Speed-Analysis works – Tejas Shetty Jun 12 '21 at 17:27
  • This new project [codepod.io](https://codepod.io/) looks like a really different way to approach this: "Canvas-based Scalable Interactive Coding. CodePod.io is an open-source canvas-based coding IDE that helps programmers develop large, production-ready projects faster by presenting interactive coding (e.g., Jupyter) on a hierarchical, scoped, 2D canvas." – Wayne May 10 '23 at 15:27

5 Answers


There are many ways to organise an IPython research project. I manage a team of 5 data scientists and 3 data engineers, and I found these tips to work well for our use case:

This is a summary of my PyData London talk:

http://www.slideshare.net/vladimirkazantsev/clean-code-in-jupyter-notebook

1. Create a shared (multi-project) utils library

You most likely have to reuse/repeat some code across different research projects. Start refactoring those things into a "common utils" package. Create a setup.py file and push the module to GitHub (or similar) so that team members can pip install it from VCS (a minimal sketch follows the list of examples below).

Examples of functionality to put in there are:

  • Data Warehouse or Storage access functions
  • common plotting functions
  • re-usable math/stats methods
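
A minimal setup.py sketch for such a package (the package name common_utils, the dependency list, and the repository URL below are placeholders, not from the answer):

# setup.py -- minimal sketch for a shared "common utils" package (all names are placeholders)
from setuptools import setup, find_packages

setup(
    name='common_utils',
    version='0.1.0',
    packages=find_packages(),
    install_requires=['numpy', 'pandas', 'matplotlib'],  # assumed dependencies
)

Team members can then install it straight from version control, e.g. pip install git+https://github.com/your-org/common-utils.git (placeholder URL).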

2. Split your fat master notebook into smaller notebooks

In my experience, a good length for a file with code (in any language) is only a few screens (100-400 lines). A Jupyter notebook is still a source file, but with output! Reading a notebook with 20+ cells is very hard. I like my notebooks to have 4-10 cells max.

Ideally, each notebook should have one "hypothesis-data-conclusions" triplet.

Example of splitting the notebook:

1_data_preparation.ipynb

2_data_validation.ipynb

3_exploratory_plotting.ipynb

4_simple_linear_model.ipynb

5_hierarchical_model.ipynb

playground.ipynb

Save the output of 1_data_preparation.ipynb to a pickle with df.to_pickle('clean_data.pkl') (or to CSV, or to a fast DB) and use pd.read_pickle('clean_data.pkl') at the top of each subsequent notebook.
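
In code, the hand-off between notebooks looks roughly like this (clean_data.pkl is the filename used in the answer; the preparation steps are placeholders):

# Last cell of 1_data_preparation.ipynb
import pandas as pd

df = pd.read_csv('data/raw.csv')   # placeholder for the actual preparation steps
df = df.dropna()                   # placeholder cleaning step
df.to_pickle('clean_data.pkl')     # hand-off point for the downstream notebooks

# First cell of 2_data_validation.ipynb (and the other notebooks)
import pandas as pd

df = pd.read_pickle('clean_data.pkl')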

3. It is not Python - it is IPython Notebook

What makes a notebook unique is its cells. Use them well. Each cell should be an "idea-execution-output" triplet. If a cell does not output anything, combine it with the following cell. The import cell is an exception: producing no output is its expected output.

If a cell has several outputs, it may be worth splitting it.

Hiding imports may or may not be a good idea:

from myimports import *

Your reader may want to figure out exactly what you are importing so she can use the same stuff in her own research. So use this with caution. We do use it for pandas, numpy, matplotlib and SQL, however.

Hiding "secret sauce" in /helpers/model.py is bad:

myutil.fit_model_and_calculate(df)

This may save you typing and remove duplicate code, but your collaborator will have to open another file to figure out what's going on. Unfortunately, the notebook (Jupyter) is quite an inflexible and basic environment, but you still don't want to force your reader to leave it for every piece of code. I hope that IDEs will improve in the future, but for now, keep the "secret sauce" inside the notebook, and the "boring and obvious utils" wherever you see fit. DRY still applies - you have to find the balance.

This should not stop you from packaging re-usable code into functions or even small classes. But "flat is better than nested".

4. Keep notebooks clean

You should be able to "Restart & Run All" at any point in time.

Each re-run should be fast! That means you may have to invest in writing some caching functions. Maybe you even want to put those into your "common utils" module.
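
A minimal sketch of such a caching helper, assuming a pickle file on disk as the cache (the name cached is an illustration, not from the answer):

# Hypothetical caching helper for expensive notebook steps -- a sketch, not the answer's code
import os
import pickle

def cached(path, compute):
    """Return the pickled result at `path` if it exists; otherwise compute, save and return it."""
    if os.path.exists(path):
        with open(path, 'rb') as f:
            return pickle.load(f)
    result = compute()
    with open(path, 'wb') as f:
        pickle.dump(result, f)
    return result

# Usage in a cell -- re-runs are fast because the expensive step only runs once:
# df = cached('cache/features.pkl', lambda: build_features(raw_df))   # build_features is hypothetical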

Each cell should be executable multiple times, without the need to re-initialise the notebook. This saves you time and keeps the code more robust. A cell may still depend on state created by previous cells, though; making each cell completely independent from the cells above is an anti-pattern, IMO.

After you are done with the research, you are not done with the notebook. Refactor.

5. Create a project module, but be very selective

If you keep re-using a plotting or analytics function, do refactor it into this module. But in my experience, people expect to read and understand a notebook without opening multiple util sub-modules. So naming your sub-routines well is even more important here than in normal Python.

"Clean code reads like well written prose" Grady Booch (developer of UML)

6. Host Jupyter server in the cloud for the entire team

You will have one environment, so everyone can quickly review and validate research without the need to match the environment (even though conda makes this pretty easy).

And you can configure defaults, like mpl style/colors, and make matplotlib inline by default:

In ~/.ipython/profile_default/ipython_config.py, add the line:

c.InteractiveShellApp.matplotlib = 'inline'
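
The same config file can also set team-wide inline figure defaults; a sketch (the specific values below are illustrative assumptions, not from the answer):

c.InlineBackend.rc = {'figure.figsize': (10, 6),   # assumed team-wide default figure size
                      'figure.dpi': 100}           # assumed default resolution
c.InlineBackend.figure_format = 'retina'           # assumed; sharper inline figures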

7. (experimental idea) Run a notebook from another notebook, with different parameters

Quite often you may want to re-run the whole notebook, but with different input parameters.

To do this, you can structure your research notebook as follows: place a params dictionary in the first cell of the "source notebook".

params = dict(platform='iOS',
              start_date='2016-05-01',
              retention=7)
df = get_data(params)    # plus whatever other arguments your notebook needs
do_analysis(params)

And in another notebook (at a higher logical level), execute it using this function:

import io

import nbformat
from IPython import get_ipython


def run_notebook(nbfile, **kwargs):
    """
    example:
    run_notebook('report.ipynb', platform='google_play', start_date='2016-06-10')
    """

    def read_notebook(nbfile):
        if not nbfile.endswith('.ipynb'):
            nbfile += '.ipynb'

        with io.open(nbfile) as f:
            nb = nbformat.read(f, as_version=4)
        return nb

    ip = get_ipython()
    gl = ip.ns_table['user_global']   # global namespace of the calling notebook
    gl['params'] = None
    arguments_in_original_state = True

    for cell in read_notebook(nbfile).cells:
        if cell.cell_type != 'code':
            continue
        ip.run_cell(cell.source)

        # once the source notebook has defined its params dict, override it with kwargs (only once)
        if arguments_in_original_state and isinstance(gl['params'], dict):
            gl['params'].update(kwargs)
            arguments_in_original_state = False
Whether this "design pattern" proves to be useful remains to be seen. We had some success with it - at least we stopped duplicating notebooks only to change a few inputs.

Refactoring the notebook into a class or module breaks the quick "idea-execute-output" feedback loop that cells provide. And, IMHO, it is not "ipythonic".

8. Write (unit) tests for your shared library in notebooks and run them with py.test

There is a plugin for py.test that can discover and run tests inside notebooks!

https://pypi.python.org/pypi/pytest-ipynb
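
Whether the tests live in a notebook (via the plugin above) or in a plain test module, they can stay small. A hedged sketch of a py.test test for a hypothetical function in the shared utils library (normalize and common_utils are illustrative names, not from the answer):

# test_common_utils.py -- hypothetical sketch; common_utils.normalize is an assumed function
import numpy as np
import pytest

from common_utils import normalize   # assumed helper that rescales values to [0, 1]

def test_normalize_returns_unit_range():
    values = np.array([2.0, 4.0, 6.0])
    result = normalize(values)
    assert result.min() == pytest.approx(0.0)
    assert result.max() == pytest.approx(1.0)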

volodymyr
  • This is very useful, and I have something similar, for example `import *`. But I add more to that at the top; see the update in the question. Also, I thought about splitting the notebook into several parts. After all, notebooks are made up of cells, and splitting them into several pages would be easier for navigation. But that requires heavy work on rewriting the notebook source code. – ZK Zhao Jul 05 '16 at 04:20
  • In my experience, splitting pays off quickly. Shorter notebooks are much quicker to "Restart & Run All". Also, you can work in parallel. Refactoring is a necessary step anyway - otherwise it would be very hard to come back to your research later. – volodymyr Jul 05 '16 at 09:00
  • Also, how do you pass variables between notebooks? I mean, several notebooks may work on one `dataframe`: one may clean its data, another may use it to build the model. How do you deal with this problem? – ZK Zhao Jul 10 '16 at 05:05
  • In "1_data_prep.ipynb" - df.to_pickle('clean_data.pkl'). In other notebooks - df = pd.read_pickle('clean_data.pkl'). You can, of course, use a more suitable file format, like HDF5 via PyTables. – volodymyr Jul 10 '16 at 11:59
  • Thanks! very useful roadmap – physiker May 21 '19 at 14:01
  • One problem with the approach of separating utils into a library is that you can't rapidly develop new functions in that package or adapt functions easily. But I guess developing inside the notebook and then moving code out to the utils library once a function is finalized is probably a fine approach. – Marthinus Bosman Aug 12 '22 at 12:25

While the given answers cover the topic thoroughly, it is still worth mentioning Cookiecutter, which provides a data science boilerplate project structure:

Cookiecutter Data Science

It provides a data science template for projects in Python with a logical, reasonably standardized, yet flexible project structure for doing and sharing data science work.

Your analysis doesn't have to be in Python, but the template does provide some Python boilerplate (in the src folder for example, and the Sphinx documentation skeleton in docs). However, nothing is binding.

The following quote from the project description sums it up pretty nicely:

Nobody sits around before creating a new Rails project to figure out where they want to put their views; they just run rails new to get a standard project skeleton like everybody else.

Requirements:

  • Python 2.7 or 3.5
  • cookiecutter Python package >= 1.4.0: pip install cookiecutter

Getting started

Starting a new project is as easy as running this command at the command line. No need to create a directory first; cookiecutter will do it for you.

cookiecutter https://github.com/drivendata/cookiecutter-data-science

Directory structure

├── LICENSE
├── Makefile           <- Makefile with commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results-oriented visualizations
│       └── visualize.py
│
└── tox.ini            <- tox file with settings for running tox; see tox.testrun.org

Related:

ProjectTemplate - provides a similar system for R data analysis.

wp78de
  • Yeah, but one has to learn how to write reStructuredText for effective use of Sphinx. One can use Markdown with some hacks, but I think it is not so easy to use numpy-style docstrings. If I am wrong, please do correct me. – Tejas Shetty Jun 12 '21 at 15:57
  • And I really could not figure out tox. Maybe I am dumb? – Tejas Shetty Jun 12 '21 at 16:18
  • Maybe we need to have different cookie cutters for computational theoreticians and experimentalists who have analysed their data. – Tejas Shetty Jun 12 '21 at 16:18

You should ideally have a library hierarchy. I would organize it as follows:

Package wsautils

Fundamental, lowest level package [No dependencies]

  • stringutils.py: contains the most basic functions, such as string manipulation
  • dateutils.py: date manipulation methods

Package wsadata

  • Parsing data, dataframe manipulations, helper methods for Pandas etc.
  • Depends on [wsautils]
    • pandasutils.py
    • parseutils.py
    • jsonutils.py [this could also go in wsautils]
    • etc.

Package wsamath (or wsastats)

  • Math-related utilities, models, PDFs, CDFs
  • Depends on [wsautils, wsadata]
    • probabilityutils.py
    • statutils.py
    • etc.

Package wsacharts [or wsaplot]

  • GUI, Plotting, Matplotlib, GGplot etc
  • Depends on [wsautils, wsamath]
    • histogram.py
    • piechart.py
    • etc.

Just an idea - you could also just have a single file here called chartutils or something.

You get the idea. Create more libraries as necessary without making too many.
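
As an illustration of the lowest layer, here is a hedged sketch of what a function in wsautils/dateutils.py might look like (the function itself is hypothetical; the point is that it relies only on the standard library, so the package stays dependency-free):

# wsautils/dateutils.py -- hypothetical sketch of a lowest-level, dependency-free utility
from datetime import datetime, timedelta

def date_range(start, end, fmt='%Y-%m-%d'):
    """Return every date string from start to end, inclusive."""
    start_dt = datetime.strptime(start, fmt)
    end_dt = datetime.strptime(end, fmt)
    days = (end_dt - start_dt).days
    return [(start_dt + timedelta(days=i)).strftime(fmt) for i in range(days + 1)]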

A few other tips:

  • Follow the principles of good Python package management thoroughly. Read this: http://python-packaging-user-guide.readthedocs.org/en/latest/installing/
  • Enforce strict dependency management via a script or a tool, such that there are no circular dependencies between packages
  • Define the name and purpose of each library/module well so that other users can also intuitively tell where a method/utility should go
  • Follow good Python coding standards (see PEP-8)
  • Write test cases for every library/package
  • Use a good editor (PyCharm is a good one for Python/IPython)
  • Document your APIs and methods

Finally, remember that there are many ways to skin a cat and the above is just one that I happen to like. HTH.

Sid

Strange that no one mentioned this. Write your next project using nbdev. From the docs, we have:

Features of Nbdev

nbdev provides the following tools for developers:

  • Automatically generate docs from Jupyter notebooks. These docs are searchable and automatically hyperlinked to appropriate documentation pages by introspecting keywords you surround in backticks.
  • Utilities to automate the publishing of PyPI and conda packages including version number management.
  • A robust, two-way sync between notebooks and source code, which allow you to use your IDE for code navigation or quick edits if desired.
  • Fine-grained control on hiding/showing cells: you can choose to hide entire cells, just the output, or just the input. Furthermore, you can embed cells in collapsible elements that are open or closed by default.
  • Ability to write tests directly in notebooks without having to learn special APIs. These tests get executed in parallel with a single CLI command. You can even define specific groups of tests such that you don't always have to run long-running tests.
  • Tools for merge/conflict resolution with notebooks in a human readable format.
  • Continuous integration (CI) comes with GitHub Actions set up for you out of the box, that will run tests automatically for you. Even if you are not familiar with CI or GitHub Actions, this starts working right away for you without any manual intervention.
  • Integration With GitHub Pages for docs hosting: nbdev allows you to easily host your documentation for free, using GitHub pages.
  • Create Python modules, following best practices such as automatically defining __all__ (more details) with your exported functions, classes, and variables.
  • Math equation support with LaTeX.
  • ... and much more! See the Getting Started section for more information.

For a quick start

  • The tutorial.
  • A minimal, end-to-end example of using nbdev. I suggest replicating this example after reading through the tutorial to solidify your understanding.
  • Use the nbdev_template.
  • Then wonder why you did not try this out earlier, even after knowing about it 1.5 years ago (like me).

If you like videos

If the video links fail, search for the titles on YouTube. Also, follow all the guidelines in volodymyr's answer above.

All these comments were specific to notebooks. For any code, you have to:

  • Write tests (before, or at least after, you write the code).
  • Add documentation for functions (preferably numpy style, since this is a scientific package).
  • Share it when you publish a paper so that others need not reinvent the wheel (especially those who work in physics).
Tejas Shetty

If you hate notebooks, try out these cookiecutters

Tejas Shetty