Changes from all commits
95 commits
3ecbdae
Added deconv to InnerProductLayer, SoftmaxLayer, DropoutLayer, ReLULa…
yosinski May 1, 2015
9688032
Added missing functions
yosinski May 5, 2015
20a81e5
Merge branch 'master' into deconv-deep-vis-toolbox
yosinski Feb 22, 2016
c1c559c
Don't force datum.label=0 in array_to_datum
lukeyeager Feb 29, 2016
3046be6
Merge branch 'master' into deconv-deep-vis-toolbox
yosinski Mar 9, 2016
542d216
Update Makefile: Changed MKL_DIR to MKLROOT
jreniecki Mar 15, 2016
26961f4
Merge branch 'master' into deconv-deep-vis-toolbox-up
yosinski Mar 16, 2016
55cda91
Merged in latest Caffe master and refactored and cleaned up Deconv vs…
yosinski Mar 16, 2016
f19f0f1
Merge pull request #3821 from jreniecki/mkl
shelhamer Mar 29, 2016
7a81836
Use lazy initialization to reuse orderd dict/list creations to save t…
Mar 30, 2016
389db96
Merge pull request #3891 from danielgordon10/pycaffe-multi-instantiat…
longjon Apr 1, 2016
dee01c8
test_net.cpp: add TestForcePropagateDown
jeffdonahue Apr 4, 2016
77cde9c
Net: setting `propagate_down: true` forces backprop
jeffdonahue Jan 27, 2016
843575e
Merge pull request #3942 from jeffdonahue/propagate-down-true
jeffdonahue Apr 5, 2016
3c3dc95
Solving issue with exp layer with base e
Apr 8, 2016
d21772c
Merge pull request #3937 from emaggiori/exp
jeffdonahue Apr 8, 2016
09130ce
Fix protobuf message generation
tpwrules Apr 11, 2016
219532f
Fix typo in help text for "-model" option
mnogu Apr 12, 2016
857eb24
Merge pull request #3982 from mnogu/fix-typo-model-option
shelhamer Apr 13, 2016
b265134
[docs] install: CUDA 7+ and cuDNN v4 compatible
shelhamer Apr 13, 2016
462a688
[docs] install: include latest versions and platforms, highlight guides
shelhamer Apr 13, 2016
0ef5918
[docs] install: be more firm about compute capability >= 3.0
shelhamer Apr 14, 2016
b916450
[docs] install: include more lab tested hardware
shelhamer Apr 14, 2016
ae5343d
Merge pull request #3988 from shelhamer/install-docs
shelhamer Apr 14, 2016
e867e60
[test] CropLayer: test dimensions check to reveal bounds checking bug
shelhamer Apr 15, 2016
75b0d40
[fix] CropLayer: check dimension bounds only for cropped dimensions
shelhamer Apr 15, 2016
00dc3d1
CropLayer: groom comments
shelhamer Apr 15, 2016
8c66fa5
Merge pull request #3993 from shelhamer/fix-crop
shelhamer Apr 15, 2016
1c49130
Allow the python layer have attribute "phase"
ZhouYzzz Apr 15, 2016
458928a
Typo in docs/installation.md
lukeyeager Apr 18, 2016
045e5ba
Merge pull request #4007 from lukeyeager/bvlc/docs-typo
shelhamer Apr 19, 2016
bd76210
Explicitly point out -weights flag in tutorial
achalddave Apr 20, 2016
5166583
Merge pull request #3749 from lukeyeager/bvlc/array_to_datum-default-…
shelhamer Apr 20, 2016
faba632
Merge pull request #4024 from achalddave/finetune-flickr-style-tutorial
shelhamer Apr 20, 2016
9042664
Don't set map_size=1TB in util/db_lmdb
lukeyeager Feb 26, 2016
f30c61c
Print to stderr for example LMDB code
lukeyeager Feb 26, 2016
74040cb
Update MNIST example to use new DB classes
lukeyeager Feb 26, 2016
bff14b4
Fixed #4029: test the network every 500 iterations, not 1000 iterations
DaiYingjie Apr 23, 2016
0e145c5
Read the data as a binary
ebadawy Apr 24, 2016
d8e2f05
Merge pull request #3731 from lukeyeager/lmdb-map-full
shelhamer Apr 25, 2016
c6d93da
Merge pull request #4033 from HeGaoYuan/master
jeffdonahue Apr 26, 2016
8619fbb
fixed typo in download script command cpp_classification
samster25 Apr 27, 2016
859cf6e
Fix an error in the example of ReshapeParameter.
wk910930 Apr 27, 2016
886563b
Merge pull request #4051 from samster25/master
shelhamer Apr 27, 2016
8714b53
avoid non-integer array indices
drewabbot Apr 28, 2016
673e8cf
Suppress boost registration warnings in pycaffe (Based on #3960)
Apr 28, 2016
f623d04
Merge pull request #4069 from seanbell/pycaffe-boost-warnings-fix
shelhamer Apr 28, 2016
2da8600
draw_net: accept prototxt without name
mnogu Apr 29, 2016
cb3c992
fix grep in CUDA version detection to accomodate OSX's grep (and othe…
Apr 30, 2016
b86b0ae
Merge pull request #4071 from mnogu/optional-name
ajtulloch May 1, 2016
5d423b7
Pin the base image version for the GPU Dockerfile
flx42 May 2, 2016
9882ca1
Merge pull request #4075 from szha/osx_makefile_fix
longjon May 4, 2016
d7196bb
Merge pull request #4065 from drewabbot/master
longjon May 4, 2016
cb493ef
Merge pull request #4056 from wk910930/fix-ReshapeParameter-example
jeffdonahue May 4, 2016
2f7fc59
Merge pull request #4040 from ebadawy/master
longjon May 4, 2016
6f5f7c5
Merge pull request #3977 from tpwrules/master
longjon May 4, 2016
c2dba92
Add test for attribute "phase" in python layer
ZhouYzzz May 4, 2016
5acc17a
Exit on error and report argument error details.
achalddave May 4, 2016
4f22fce
Remove trailing spaces
achalddave May 4, 2016
938918c
Reformat to fit in 79 columns
achalddave May 4, 2016
c2656f0
Fix typo (indecies->indices)
achalddave May 4, 2016
f467ead
Merge pull request #4082 from flx42/pin_base_docker_image
shelhamer May 4, 2016
de8ac32
Merge pull request #3995 from ZhouYzzz/python-phase
longjon May 4, 2016
3d41c8a
Merge pull request #4048 from achalddave/python_plot_exit_properly
longjon May 4, 2016
e6fc797
[build] note that `make clean` clears build and distribute dirs
shelhamer May 4, 2016
af04325
Merge pull request #4094 from shelhamer/make-clean-clears-distribute
shelhamer May 4, 2016
c419f85
add parameter layer for learning any bottom
longjon Jul 9, 2015
4e690b2
fix problems in net_surgery.ipynb
crazytan Apr 28, 2016
da004d7
Allow reshaping blobs to size 0.
erictzeng May 6, 2016
14dc012
Merge pull request #4101 from erictzeng/reshape_zero
jeffdonahue May 6, 2016
c6bd853
Merge pull request #2079 from longjon/parameter-layer
longjon May 9, 2016
4264293
Catch MDB_MAP_FULL errors from mdb_txn_commit
lukeyeager May 9, 2016
d242564
Merge pull request #4117 from lukeyeager/bvlc/fix-lmdb-pr
shelhamer May 9, 2016
a934ca5
[build] (CMake) customisable Caffe version/soversion
rayglover-ibm May 10, 2016
9d239eb
Merge pull request #4121 from rayglover-ibm/cmake
shelhamer May 10, 2016
bb6ca47
a comment misses a space char
gdh1995 May 11, 2016
9c46289
Merge pull request #4128 from gdh1995/master
longjon May 12, 2016
078d998
fixed typo in io.py
millskyle May 13, 2016
87c9dc3
Fix Makefile CUDA_VERSION extraction on OSX Yosemite
May 13, 2016
bb0c1a5
Merge pull request #4144 from millskyle/python_io_typo
shelhamer May 13, 2016
e8ec9f8
add check for background and foreground window size > 0 in WindowData…
bobpoekert May 14, 2016
b43c8e4
Add cuDNN v5 support, drop cuDNN v3 support
flx42 May 16, 2016
8730b14
Update Dockerfile to cuDNN v5
flx42 May 16, 2016
1c3af70
Update supported cuDNN version in the documentation
flx42 May 16, 2016
7cf3538
Merge pull request #4159 from flx42/cudnn_v5_support
shelhamer May 17, 2016
5f21dad
Merge pull request #4148 from bobpoekert/window_data_nonzero_check
longjon May 17, 2016
0acb8db
Merge pull request #4146 from yalesong/fix-makefile-osx-yosemite
longjon May 17, 2016
8523f5d
Merge pull request #4070 from crazytan/ipython
longjon May 17, 2016
a8cc860
handle image names with spaces
crazytan Apr 27, 2016
e79bc8f
Merge pull request #4059 from crazytan/master
longjon May 18, 2016
4bf4b18
Overhaul TravisCI
lukeyeager May 24, 2016
5a45b87
Merge pull request #4207 from lukeyeager/bvlc/travis-overhaul
shelhamer May 25, 2016
2687932
Remove misleading comment from a test file
lukeyeager May 25, 2016
923e7e8
Merge pull request #4214 from lukeyeager/bvlc/remove-comment-in-tests
shelhamer May 26, 2016
ba34926
Merge branch 'master' into deconv-deep-vis-toolbox
asanakoy Jun 22, 2016
58 changes: 34 additions & 24 deletions .travis.yml
@@ -1,40 +1,50 @@
# Use a build matrix to do two builds in parallel:
# one using CMake, and one using make.
dist: trusty
sudo: required

language: cpp
compiler: gcc

env:
global:
- NUM_THREADS=4
matrix:
- WITH_CUDA=false WITH_CMAKE=false WITH_IO=true
- WITH_CUDA=false WITH_CMAKE=true WITH_IO=true PYTHON_VERSION=3
- WITH_CUDA=true WITH_CMAKE=false WITH_IO=true
- WITH_CUDA=true WITH_CMAKE=true WITH_IO=true
- WITH_CUDA=false WITH_CMAKE=false WITH_IO=false
- WITH_CUDA=false WITH_CMAKE=true WITH_IO=false PYTHON_VERSION=3
# Use a build matrix to test many builds in parallel
# envvar defaults:
# WITH_CMAKE: false
# WITH_PYTHON3: false
# WITH_IO: true
# WITH_CUDA: false
# WITH_CUDNN: false
- BUILD_NAME="default-make"
# - BUILD_NAME="python3-make" WITH_PYTHON3=true
- BUILD_NAME="no-io-make" WITH_IO=false
- BUILD_NAME="cuda-make" WITH_CUDA=true
- BUILD_NAME="cudnn-make" WITH_CUDA=true WITH_CUDNN=true

language: cpp
- BUILD_NAME="default-cmake" WITH_CMAKE=true
- BUILD_NAME="python3-cmake" WITH_CMAKE=true WITH_PYTHON3=true
- BUILD_NAME="no-io-cmake" WITH_CMAKE=true WITH_IO=false
- BUILD_NAME="cuda-cmake" WITH_CMAKE=true WITH_CUDA=true
- BUILD_NAME="cudnn-cmake" WITH_CMAKE=true WITH_CUDA=true WITH_CUDNN=true

# Cache Ubuntu apt packages.
cache:
apt: true
directories:
- /home/travis/miniconda
- /home/travis/miniconda2
- /home/travis/miniconda3

compiler: gcc

before_install:
- export NUM_THREADS=4
- export SCRIPTS=./scripts/travis
- export CONDA_DIR="/home/travis/miniconda$PYTHON_VERSION"
- source ./scripts/travis/defaults.sh

install:
- sudo -E $SCRIPTS/travis_install.sh
- sudo -E ./scripts/travis/install-deps.sh
- ./scripts/travis/setup-venv.sh ~/venv
- source ~/venv/bin/activate
- ./scripts/travis/install-python-deps.sh

before_script:
- export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib:/usr/local/cuda/lib64:$CONDA_DIR/lib
- export PATH=$CONDA_DIR/bin:$PATH
- if ! $WITH_CMAKE; then $SCRIPTS/travis_setup_makefile_config.sh; fi
- ./scripts/travis/configure.sh

script: $SCRIPTS/travis_build_and_test.sh
script:
- ./scripts/travis/build.sh
- ./scripts/travis/test.sh

notifications:
# Emails are sent to the committer's git-configured email address by default,
4 changes: 2 additions & 2 deletions CMakeLists.txt
@@ -10,8 +10,8 @@ endif()
project(Caffe C CXX)

# ---[ Caffe version
set(CAFFE_TARGET_VERSION "1.0.0-rc3")
set(CAFFE_TARGET_SOVERSION "1.0.0-rc3")
set(CAFFE_TARGET_VERSION "1.0.0-rc3" CACHE STRING "Caffe logical version")
set(CAFFE_TARGET_SOVERSION "1.0.0-rc3" CACHE STRING "Caffe soname version")
add_definitions(-DCAFFE_VERSION=${CAFFE_TARGET_VERSION})

# ---[ Using cmake scripts and modules
8 changes: 4 additions & 4 deletions Makefile
@@ -272,7 +272,7 @@ endif
ifeq ($(OSX), 1)
CXX := /usr/bin/clang++
ifneq ($(CPU_ONLY), 1)
CUDA_VERSION := $(shell $(CUDA_DIR)/bin/nvcc -V | grep -o 'release \d' | grep -o '\d')
CUDA_VERSION := $(shell $(CUDA_DIR)/bin/nvcc -V | grep -o 'release [0-9.]*' | tr -d '[a-z ]')
ifeq ($(shell echo | awk '{exit $(CUDA_VERSION) < 7.0;}'), 1)
CXXFLAGS += -stdlib=libstdc++
LINKFLAGS += -stdlib=libstdc++
@@ -364,9 +364,9 @@ ifeq ($(BLAS), mkl)
# MKL
LIBRARIES += mkl_rt
COMMON_FLAGS += -DUSE_MKL
MKL_DIR ?= /opt/intel/mkl
BLAS_INCLUDE ?= $(MKL_DIR)/include
BLAS_LIB ?= $(MKL_DIR)/lib $(MKL_DIR)/lib/intel64
MKLROOT ?= /opt/intel/mkl
BLAS_INCLUDE ?= $(MKLROOT)/include
BLAS_LIB ?= $(MKLROOT)/lib $(MKLROOT)/lib/intel64
else ifeq ($(BLAS), open)
# OpenBLAS
LIBRARIES += openblas
1 change: 1 addition & 0 deletions Makefile.config.example
@@ -98,6 +98,7 @@ LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

2 changes: 1 addition & 1 deletion docker/Makefile
@@ -22,7 +22,7 @@ docker_files: standalone_files

standalone_files: standalone/cpu/Dockerfile standalone/gpu/Dockerfile

FROM_GPU = "nvidia/cuda:cudnn"
FROM_GPU = "nvidia/cuda:7.5-cudnn5-devel-ubuntu14.04"
FROM_CPU = "ubuntu:14.04"
GPU_CMAKE_ARGS = -DUSE_CUDNN=1
CPU_CMAKE_ARGS = -DCPU_ONLY=1
2 changes: 1 addition & 1 deletion docker/standalone/gpu/Dockerfile
@@ -1,4 +1,4 @@
FROM nvidia/cuda:cudnn
FROM nvidia/cuda:7.5-cudnn5-devel-ubuntu14.04
MAINTAINER caffe-maint@googlegroups.com

RUN apt-get update && apt-get install -y --no-install-recommends \
32 changes: 19 additions & 13 deletions docs/installation.md
@@ -5,13 +5,23 @@ title: Installation
# Installation

Prior to installing, have a glance through this guide and take note of the details for your platform.
We install and run Caffe on Ubuntu 14.04 and 12.04, OS X 10.10 / 10.9 / 10.8, and AWS.
The official Makefile and `Makefile.config` build are complemented by an automatic CMake build from the community.
We install and run Caffe on Ubuntu 16.04–12.04, OS X 10.11–10.8, and through Docker and AWS.
The official Makefile and `Makefile.config` build are complemented by a [community CMake build](#cmake-build).

**Step-by-step Instructions**:

- [Docker setup](https://github.com/BVLC/caffe/tree/master/docker) *out-of-the-box brewing*
- [Ubuntu installation](install_apt.html) *the standard platform*
- [OS X installation](install_osx.html)
- [RHEL / CentOS / Fedora installation](install_yum.html)
- [Windows](https://github.com/BVLC/caffe/tree/windows) *see the Windows branch led by Microsoft*
- [OpenCL](https://github.com/BVLC/caffe/tree/opencl) *see the OpenCL branch led by Fabian Tschopp*

**Overview**:

- [Prerequisites](#prerequisites)
- [Compilation](#compilation)
- [Hardware](#hardware)
- Platforms: [Ubuntu guide](install_apt.html), [OS X guide](install_osx.html), and [RHEL / CentOS / Fedora guide](install_yum.html)

When updating Caffe, it's best to `make clean` before re-compiling.

@@ -20,7 +30,7 @@ When updating Caffe, it's best to `make clean` before re-compiling.
Caffe has several dependencies:

* [CUDA](https://developer.nvidia.com/cuda-zone) is required for GPU mode.
* library version 7.0 and the latest driver version are recommended, but 6.* is fine too
* library version 7+ and the latest driver version are recommended, but 6.* is fine too
* 5.5, and 5.0 are compatible but considered legacy
* [BLAS](http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) via ATLAS, MKL, or OpenBLAS.
* [Boost](http://www.boost.org/) >= 1.55
@@ -30,14 +40,14 @@ Optional dependencies:

* [OpenCV](http://opencv.org/) >= 2.4 including 3.0
* IO libraries: `lmdb`, `leveldb` (note: leveldb requires `snappy`)
* cuDNN for GPU acceleration (v3)
* cuDNN for GPU acceleration (v5)

Pycaffe and Matcaffe interfaces have their own natural needs.

* For Python Caffe: `Python 2.7` or `Python 3.3+`, `numpy (>= 1.7)`, boost-provided `boost.python`
* For MATLAB Caffe: MATLAB with the `mex` compiler.

**cuDNN Caffe**: for fastest operation Caffe is accelerated by drop-in integration of [NVIDIA cuDNN](https://developer.nvidia.com/cudnn). To speed up your Caffe models, install cuDNN then uncomment the `USE_CUDNN := 1` flag in `Makefile.config` when installing Caffe. Acceleration is automatic. The current version is cuDNN v3; older versions are supported in older Caffe.
**cuDNN Caffe**: for fastest operation Caffe is accelerated by drop-in integration of [NVIDIA cuDNN](https://developer.nvidia.com/cudnn). To speed up your Caffe models, install cuDNN then uncomment the `USE_CUDNN := 1` flag in `Makefile.config` when installing Caffe. Acceleration is automatic. The current version is cuDNN v5; older versions are supported in older Caffe.

**CPU-only Caffe**: for cold-brewed CPU-only Caffe uncomment the `CPU_ONLY := 1` flag in `Makefile.config` to configure and build Caffe without CUDA. This is helpful for cloud or cluster deployment.

@@ -82,10 +92,6 @@ Install MATLAB, and make sure that its `mex` is in your `$PATH`.

*Caffe's MATLAB interface works with versions 2015a, 2014a/b, 2013a/b, and 2012b.*

#### Windows

There is an unofficial Windows port of Caffe at [niuzhiheng/caffe:windows](https://github.com/niuzhiheng/caffe). Thanks [@niuzhiheng](https://github.com/niuzhiheng)!

## Compilation

Caffe can be compiled with either Make or CMake. Make is officially supported while CMake is supported by the community.
@@ -113,7 +119,7 @@ Be sure to set your MATLAB and Python paths in `Makefile.config` first!

Now that you have installed Caffe, check out the [MNIST tutorial](gathered/examples/mnist.html) and the [reference ImageNet model tutorial](gathered/examples/imagenet.html).

### Compilation with CMake
### CMake Build

In lieu of manually editing `Makefile.config` to configure the build, Caffe offers an unofficial CMake build thanks to @Nerei, @akosiorek, and other members of the community. It requires CMake version >= 2.8.7.
The basic steps are as follows:
@@ -129,9 +135,9 @@ See [PR #1667](https://github.com/BVLC/caffe/pull/1667) for options and details.

## Hardware

**Laboratory Tested Hardware**: Berkeley Vision runs Caffe with K40s, K20s, and Titans including models at ImageNet/ILSVRC scale. We also run on GTX series cards (980s and 770s) and GPU-equipped MacBook Pros. We have not encountered any trouble in-house with devices with CUDA capability >= 3.0. All reported hardware issues thus-far have been due to GPU configuration, overheating, and the like.
**Laboratory Tested Hardware**: Berkeley Vision runs Caffe with Titan Xs, K80s, GTX 980s, K40s, K20s, Titans, and GTX 770s including models at ImageNet/ILSVRC scale. We have not encountered any trouble in-house with devices with CUDA capability >= 3.0. All reported hardware issues thus-far have been due to GPU configuration, overheating, and the like.

**CUDA compute capability**: devices with compute capability <= 2.0 may have to reduce CUDA thread numbers and batch sizes due to hardware constraints. Your mileage may vary.
**CUDA compute capability**: devices with compute capability <= 2.0 may have to reduce CUDA thread numbers and batch sizes due to hardware constraints. Brew with caution; we recommend compute capability >= 3.0.

Once installed, check your times against our [reference performance numbers](performance_hardware.html) to make sure everything is configured properly.

2 changes: 2 additions & 0 deletions examples/cifar10/convert_cifar_data.cpp
@@ -91,6 +91,8 @@ void convert_dataset(const string& input_folder, const string& output_folder,
}

int main(int argc, char** argv) {
FLAGS_alsologtostderr = 1;

if (argc != 4) {
printf("This script converts the CIFAR dataset to the leveldb format used\n"
"by caffe to perform classification.\n"
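Note on the `FLAGS_alsologtostderr = 1;` line added to the converter above: by default glog writes `LOG(INFO)` messages only to log files (only ERROR and above are echoed to stderr), so the converters' progress messages would not appear on the console. A minimal, hypothetical standalone sketch of the behavior — not part of this diff:

```cpp
#include <glog/logging.h>

// Hypothetical standalone example (not part of this PR): with
// alsologtostderr set, every LOG severity is echoed to stderr in
// addition to the usual log files.
int main(int argc, char** argv) {
  FLAGS_alsologtostderr = 1;              // same switch the converters now flip
  google::InitGoogleLogging(argv[0]);
  LOG(INFO) << "Processed " << 1000 << " files.";  // now visible on the console
  return 0;
}
```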
2 changes: 1 addition & 1 deletion examples/cpp_classification/readme.md
@@ -42,7 +42,7 @@ script:
The ImageNet labels file (also called the *synset file*) is also
required in order to map a prediction to the name of the class:
```
./data/ilsvrc12/get_ilsvrc_aux.sh.
./data/ilsvrc12/get_ilsvrc_aux.sh
```
Using the files that were downloaded, we can classify the provided cat
image (`examples/images/cat.jpg`) using this command:
6 changes: 5 additions & 1 deletion examples/finetune_flickr_style/readme.md
@@ -57,7 +57,11 @@ The prototxts in this example assume this, and also assume the presence of the I

We'll also need the ImageNet-trained model, which you can obtain by running `./scripts/download_model_binary.py models/bvlc_reference_caffenet`.

Now we can train! (You can fine-tune in CPU mode by leaving out the `-gpu` flag.)
Now we can train! The key to fine-tuning is the `-weights` argument in the
command below, which tells Caffe that we want to load weights from a pre-trained
Caffe model.

(You can fine-tune in CPU mode by leaving out the `-gpu` flag.)

caffe % ./build/tools/caffe train -solver models/finetune_flickr_style/solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel -gpu 0

Binary file added examples/images/cat gray.jpg
89 changes: 14 additions & 75 deletions examples/mnist/convert_mnist_data.cpp
@@ -22,12 +22,15 @@
#include <fstream> // NOLINT(readability/streams)
#include <string>

#include "boost/scoped_ptr.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/db.hpp"
#include "caffe/util/format.hpp"

#if defined(USE_LEVELDB) && defined(USE_LMDB)

using namespace caffe; // NOLINT(build/namespaces)
using boost::scoped_ptr;
using std::string;

DEFINE_string(backend, "lmdb", "The backend for storing the result");
@@ -67,43 +70,10 @@ void convert_dataset(const char* image_filename, const char* label_filename,
image_file.read(reinterpret_cast<char*>(&cols), 4);
cols = swap_endian(cols);

// lmdb
MDB_env *mdb_env;
MDB_dbi mdb_dbi;
MDB_val mdb_key, mdb_data;
MDB_txn *mdb_txn;
// leveldb
leveldb::DB* db;
leveldb::Options options;
options.error_if_exists = true;
options.create_if_missing = true;
options.write_buffer_size = 268435456;
leveldb::WriteBatch* batch = NULL;

// Open db
if (db_backend == "leveldb") { // leveldb
LOG(INFO) << "Opening leveldb " << db_path;
leveldb::Status status = leveldb::DB::Open(
options, db_path, &db);
CHECK(status.ok()) << "Failed to open leveldb " << db_path
<< ". Is it already existing?";
batch = new leveldb::WriteBatch();
} else if (db_backend == "lmdb") { // lmdb
LOG(INFO) << "Opening lmdb " << db_path;
CHECK_EQ(mkdir(db_path, 0744), 0)
<< "mkdir " << db_path << "failed";
CHECK_EQ(mdb_env_create(&mdb_env), MDB_SUCCESS) << "mdb_env_create failed";
CHECK_EQ(mdb_env_set_mapsize(mdb_env, 1099511627776), MDB_SUCCESS) // 1TB
<< "mdb_env_set_mapsize failed";
CHECK_EQ(mdb_env_open(mdb_env, db_path, 0, 0664), MDB_SUCCESS)
<< "mdb_env_open failed";
CHECK_EQ(mdb_txn_begin(mdb_env, NULL, 0, &mdb_txn), MDB_SUCCESS)
<< "mdb_txn_begin failed";
CHECK_EQ(mdb_open(mdb_txn, NULL, 0, &mdb_dbi), MDB_SUCCESS)
<< "mdb_open failed. Does the lmdb already exist? ";
} else {
LOG(FATAL) << "Unknown db backend " << db_backend;
}

scoped_ptr<db::DB> db(db::GetDB(db_backend));
db->Open(db_path, db::NEW);
scoped_ptr<db::Transaction> txn(db->NewTransaction());

// Storing to db
char label;
@@ -125,59 +95,28 @@ void convert_dataset(const char* image_filename, const char* label_filename,
string key_str = caffe::format_int(item_id, 8);
datum.SerializeToString(&value);

// Put in db
if (db_backend == "leveldb") { // leveldb
batch->Put(key_str, value);
} else if (db_backend == "lmdb") { // lmdb
mdb_data.mv_size = value.size();
mdb_data.mv_data = reinterpret_cast<void*>(&value[0]);
mdb_key.mv_size = key_str.size();
mdb_key.mv_data = reinterpret_cast<void*>(&key_str[0]);
CHECK_EQ(mdb_put(mdb_txn, mdb_dbi, &mdb_key, &mdb_data, 0), MDB_SUCCESS)
<< "mdb_put failed";
} else {
LOG(FATAL) << "Unknown db backend " << db_backend;
}
txn->Put(key_str, value);

if (++count % 1000 == 0) {
// Commit txn
if (db_backend == "leveldb") { // leveldb
db->Write(leveldb::WriteOptions(), batch);
delete batch;
batch = new leveldb::WriteBatch();
} else if (db_backend == "lmdb") { // lmdb
CHECK_EQ(mdb_txn_commit(mdb_txn), MDB_SUCCESS)
<< "mdb_txn_commit failed";
CHECK_EQ(mdb_txn_begin(mdb_env, NULL, 0, &mdb_txn), MDB_SUCCESS)
<< "mdb_txn_begin failed";
} else {
LOG(FATAL) << "Unknown db backend " << db_backend;
}
txn->Commit();
}
}
// write the last batch
if (count % 1000 != 0) {
if (db_backend == "leveldb") { // leveldb
db->Write(leveldb::WriteOptions(), batch);
delete batch;
delete db;
} else if (db_backend == "lmdb") { // lmdb
CHECK_EQ(mdb_txn_commit(mdb_txn), MDB_SUCCESS) << "mdb_txn_commit failed";
mdb_close(mdb_env, mdb_dbi);
mdb_env_close(mdb_env);
} else {
LOG(FATAL) << "Unknown db backend " << db_backend;
}
LOG(ERROR) << "Processed " << count << " files.";
txn->Commit();
}
LOG(INFO) << "Processed " << count << " files.";
delete[] pixels;
db->Close();
}

int main(int argc, char** argv) {
#ifndef GFLAGS_GFLAGS_H_
namespace gflags = google;
#endif

FLAGS_alsologtostderr = 1;

gflags::SetUsageMessage("This script converts the MNIST dataset to\n"
"the lmdb/leveldb format used by Caffe to load data.\n"
"Usage:\n"
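For readers tracing the refactor above: the hand-rolled LMDB/LevelDB branches are replaced by the backend-agnostic `caffe::db` wrapper. A consolidated, hypothetical sketch of the resulting write loop — the `write_datums` helper is illustrative only and not part of this PR; the wrapper calls follow the hunk above:

```cpp
#include <string>

#include "boost/scoped_ptr.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/db.hpp"
#include "caffe/util/format.hpp"

using boost::scoped_ptr;

// Illustrative helper (not in the PR): write `num` serialized Datum records
// through the db wrapper, committing every 1000 puts as the converter above does.
void write_datums(const std::string& backend, const std::string& path, int num) {
  scoped_ptr<caffe::db::DB> db(caffe::db::GetDB(backend));  // "lmdb" or "leveldb"
  db->Open(path, caffe::db::NEW);
  scoped_ptr<caffe::db::Transaction> txn(db->NewTransaction());
  for (int item_id = 0; item_id < num; ++item_id) {
    caffe::Datum datum;  // fill in channels/height/width/data/label here
    std::string value;
    datum.SerializeToString(&value);
    txn->Put(caffe::format_int(item_id, 8), value);  // zero-padded 8-digit key
    if ((item_id + 1) % 1000 == 0) {
      txn->Commit();
      txn.reset(db->NewTransaction());  // start a fresh transaction
    }
  }
  if (num % 1000 != 0) { txn->Commit(); }  // flush the final partial batch
  db->Close();
}
```

Switching between `lmdb` and `leveldb` then comes down to the backend string, which is what lets the converter drop the duplicated branches.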
2 changes: 1 addition & 1 deletion examples/mnist/readme.md
@@ -248,7 +248,7 @@ These messages tell you the details about each layer, its connections and its ou
I1203 solver.cpp:36] Solver scaffolding done.
I1203 solver.cpp:44] Solving LeNet

Based on the solver setting, we will print the training loss function every 100 iterations, and test the network every 1000 iterations. You will see messages like this:
Based on the solver setting, we will print the training loss function every 100 iterations, and test the network every 500 iterations. You will see messages like this:

I1203 solver.cpp:204] Iteration 100, lr = 0.00992565
I1203 solver.cpp:66] Iteration 100, loss = 0.26044