Author:Cho, KyungHyun
Title:Improved learning algorithms for restricted Boltzmann machines
Publication type:Master's thesis
Publication year:2011
Pages:xii + 78 pages + 6 appendix pages      Language:   English
Department/School:Department of Information and Computer Science (Tietotekniikan laitos)
Main subject:Computer and Information Science   (T-61)
Supervisor:Karhunen, Juha
Instructor:Ilin, Alexander ; Raiko, Tapani
Electronic version URL: http://urn.fi/URN:NBN:fi:aalto-201207022632
OEVS: Electronic archive copy is available via the Aalto Thesis Database.
Instructions

Reading digital theses in the closed network of the Aalto University Harald Herlin Learning Centre

In the closed network of the Learning Centre, you can read digital and digitized theses that are not available in the open network.

The Learning Centre contact details and opening hours: https://learningcentre.aalto.fi/en/harald-herlin-learning-centre/

You can read theses on the Learning Centre customer computers, which are available on all floors.

Logging on to the customer computers

  • Aalto University staff members log on to the customer computers using their personal Aalto username and password.
  • Other customers log on using a shared username and password.

Opening a thesis

  • On the desktop of the customer computers, you will find an icon titled:

    Aalto Thesis Database

  • Click the icon to search the Aaltodoc database and open the thesis you are looking for. You can find the thesis file by clicking the link in the OEV or OEVS field.

Reading the thesis

  • You can either print the thesis or read it on the customer computer screen.
  • You cannot save the thesis file on a flash drive or email it.
  • You cannot copy text or images from the file.
  • You cannot edit the file.

Printing the thesis

  • You can print the thesis for your personal study or research use.
  • Aalto University students and staff members can make black-and-white prints on the PrintingPoint devices when using a computer with a personal Aalto username and password. Color printing is possible on the printer u90203-psc3, located near the customer service desk; color printing is subject to a charge for Aalto University students and staff members.
  • Other customers can use the printer u90203-psc3. All printing is subject to a charge for non-University members.
Location:P1 Ark Aalto  7070   | Archive
Keywords:Boltzmann machine
restricted Boltzmann machine
annealed importance sampling
parallel tempering
enhanced gradient
adaptive learning rate
Gaussian-Bernoulli restricted Boltzmann machine
deep learning
Abstract (eng): A restricted Boltzmann machine (RBM) is often used as a building block for deep neural networks and deep generative models, which have recently gained popularity as a way to learn large, complex probabilistic models.
In these deep models, layer-wise pretraining of RBMs is known to help find a more accurate model of the data.
An efficient learning method for RBMs is therefore important.

Learning is conventionally performed with stochastic gradients, often using an approximate method such as contrastive divergence (CD) learning to overcome the computational difficulty of the exact gradient.
Unfortunately, training RBMs with this approach is known to be difficult: learning easily diverges after initial convergence, a problem that many researchers have recently reported.
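To make the CD approximation concrete, the following is a minimal sketch of one CD-1 update for a binary RBM, assuming NumPy only; the names (W, b, c, cd1_update) and hyperparameters are illustrative and not taken from the thesis.

```python
# Minimal CD-1 sketch for a binary RBM: one Gibbs step approximates
# the model statistics in the log-likelihood gradient.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b, c, v0, lr=0.01):
    """One CD-1 step on a mini-batch v0 of shape (batch, n_visible)."""
    # Positive phase: hidden probabilities and samples given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to the visible layer and up again.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Approximate gradient: data statistics minus (one-step) model statistics.
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Usage on toy binary data.
W = 0.01 * rng.standard_normal((6, 4))
b = np.zeros(6)
c = np.zeros(4)
v = (rng.random((8, 6)) < 0.5).astype(float)
W, b, c = cd1_update(W, b, c, v)
```

Because the negative phase uses only one Gibbs step from the data rather than an equilibrium sample, the gradient is biased, which is one source of the divergence behavior the abstract describes.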

This thesis contributes important improvements that address the difficulty of training RBMs.

Based on an advanced Markov chain Monte Carlo sampling method called parallel tempering (PT), the thesis proposes PT learning as a replacement for CD learning.
Various experiments show PT learning to be superior to CD learning in both learning performance and computational overhead.
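The core mechanism of parallel tempering is running several chains at inverse temperatures β₀ < … < β_K = 1 and exchanging states between adjacent chains with a Metropolis acceptance rule. A hedged sketch of that swap step, with illustrative names only (the thesis's exact scheduling and temperature ladder are not reproduced here):

```python
# Adjacent-chain swap step for parallel tempering: states i and i+1 are
# exchanged with probability min(1, exp((beta_i - beta_j) * (E_i - E_j))).
import numpy as np

rng = np.random.default_rng(1)

def swap_adjacent(states, energies, betas):
    """Attempt a Metropolis swap between each pair of adjacent tempered chains."""
    for i in range(len(betas) - 1):
        # Log acceptance ratio for exchanging the two states.
        log_a = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
        if np.log(rng.random()) < min(0.0, log_a):
            states[i], states[i + 1] = states[i + 1], states[i]
            energies[i], energies[i + 1] = energies[i + 1], energies[i]
    return states, energies
```

High-temperature (small-β) chains mix quickly and feed fresh modes down to the β = 1 chain, which supplies the model samples for the gradient, giving much better negative-phase statistics than a single CD chain.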
The thesis also tackles the problem of choosing the right learning parameter by proposing a new algorithm, the adaptive learning rate, which automatically selects an appropriate learning rate during learning.

A closer look at the update rules suggests that learning with the traditional rules is easily disrupted by the particular representation of the data set.
Based on this observation, the thesis proposes a new set of gradient update rules, the enhanced gradient, that is more robust to the representation of the training data and to the learning parameters.
Extensive experiments on various data sets confirm that the proposed rules significantly improve learning.

Additionally, the thesis reviews the Gaussian-Bernoulli RBM (GBRBM), a variant of the RBM that can model continuous real-valued data, and tests the proposed improvements on it.
The experiments show that the improvements carry over to GBRBMs.
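For reference, one common parameterization of the GBRBM energy found in the literature (not necessarily the exact form used in the thesis) replaces the binary visible units with real-valued ones carrying a per-unit variance σᵢ²:

```latex
% Common GBRBM energy: real-valued visible units v, binary hidden units h.
% b_i, c_j are biases, w_{ij} the weights, \sigma_i the visible std. dev.
E(\mathbf{v}, \mathbf{h}) =
  \sum_i \frac{(v_i - b_i)^2}{2\sigma_i^2}
  - \sum_{i,j} \frac{v_i}{\sigma_i^2}\, w_{ij} h_j
  - \sum_j c_j h_j
```

With this energy, the conditional distribution of each visible unit given the hidden layer is Gaussian, which is what allows the model to handle continuous data.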
ED:2011-05-05
INSSI record number: 41637