ore.neural() customisation for deep learning - the documentation lacks details on customisation

Bilal
Bilal Member Posts: 494 Bronze Badge
edited Jul 18, 2017 12:26PM in R Technologies

Hi All,

I am developing a neural-network-based deep learning component for part of our Oracle-based application. By Oracle-based I mean it uses Oracle Database and Oracle JDeveloper as the underpinning technologies. Personally, I would love to see Oracle add native support for deep learning to ore.neural(); this would greatly improve our integration efforts thanks to ORE's support for SQL and PL/SQL.

So, coming back to the point: I have successfully developed the model using Keras in R, and the results are promising. For those who don't know Keras, it is a deep learning library with interfaces for Python and R. The next step is to deploy the model in the database. I am just wondering whether ore.neural() supports deep models, where more hidden layers can be defined and the architecture of the network can be customised as needed. Currently, the online documentation lacks such details.
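For concreteness, here is a minimal sketch of the kind of architecture I have in mind (the layer sizes and training data are placeholders, and it assumes the keras package for R):

library(keras)

# A small multi-layer perceptron with dropout -- the kind of network
# I would like to be able to reproduce inside the database.
model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = c(4)) %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 32, activation = "relu") %>%
  layer_dense(units = 1, activation = "linear")

model %>% compile(optimizer = optimizer_adam(), loss = "mse")

# x_train and y_train are placeholders for our application data:
# model %>% fit(x_train, y_train, epochs = 50, batch_size = 32)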

I need guidance on how to customise the network's activation functions, optimizers, and other hyperparameters when compiling and fitting a deep learning model with ore.neural(). I have checked the online ORE documentation, but these details are missing.

Can someone point me to the appropriate link where I can find all of this?

Any guidance will be highly appreciated.

Many thanks and kind regards,

Bilal


Best Answer

  • Marcos Arancibia-Oracle
    Marcos Arancibia-Oracle Member Posts: 2
    edited Jul 14, 2017 8:02PM Accepted Answer

    Hi Bilal,

    One of the best places to read about the basic and advanced options for ORE is the "Learning More about ORE" tab on OTN.

    In particular, there you will find a link to the "Oracle R Enterprise 1.5 Predictive Analytics" document: http://www.oracle.com/technetwork/database/options/advanced-analytics/r-enterprise/learnmore/session5-ore-pa-2889579.pdf

    Starting on page 136 there is a full chapter just on ore.neural, and starting on page 144 you can see the available settings for the possible activation functions, among other options.

    The basic settings you will probably want to control are the number of layers, the number of neurons per layer, and the activation function per layer. On page 152 you will find the following example:

    # The following ore.neural() call creates a model with 2 hidden layers
    # (because we pass 2 numbers giving the number of neurons per layer):
    # the first layer has 20 neurons and the second has 5. The activation
    # functions are bipolar sigmoid ('bSigmoid') for the first layer,
    # hyperbolic tangent ('tanh') for the second layer, and 'linear' for
    # the output (which is the default).
    #
    # If only the numbers of neurons are specified, the default activation
    # function for all hidden layers is bipolar sigmoid ('bSigmoid'), and
    # for the output it is 'linear'.
    #
    # For binary targets, use 'entropy' as the output activation function.
    #
    # Even without specifying hidden layers, one can pass the option
    # activations = 'entropy' just to change the output activation.

    # Push the sample data to the Oracle Database
    IRIS <- ore.push(iris)

    fit <- ore.neural(Petal.Length ~ Petal.Width + Sepal.Length,
                      data = IRIS,
                      hiddenSizes = c(20, 5),
                      activations = c('bSigmoid', 'tanh', 'linear'))
    print(fit)

    # Score in the database, then pull the results to the local R session
    ans <- predict(fit, newdata = IRIS, supplemental.cols = 'Petal.Length')
    localPredictions <- ore.pull(ans)

    # Inspect some predictions
    head(localPredictions)

    # Compute RMSE
    ore.rmse <- function(pred, obs) { sqrt(mean((pred - obs)^2, na.rm = TRUE)) }
    ore.rmse(localPredictions$pred_Petal.Length, localPredictions$Petal.Length)

    Cheers!


Answers

  • Christos Iraklis Tsatsoulis
    Christos Iraklis Tsatsoulis Member Posts: 85 Blue Ribbon
    edited Jul 17, 2017 7:11AM

    Thanks @Marcos Arancibia-Oracle

    Although this is really useful, I would argue that this kind of critical information (e.g. the list of available activation functions) should be available in the core documentation, and not only in tutorial-style PowerPoint presentations.

    @Bilal It should be clear from the above that the only functionality currently provided in ore.neural() is fully-connected layers (Dense in Keras lingo) trained with the L-BFGS optimizer, i.e. enough to build multi-layer perceptrons (MLPs). Dropout and other advanced Keras functionality (such as different optimizers) are not available.
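    To make the correspondence concrete, here is a rough sketch of the mapping (my own illustration, not from the Oracle docs):

    # In Keras (R interface), a two-hidden-layer MLP for regression might be:
    #   keras_model_sequential() %>%
    #     layer_dense(units = 20, activation = "tanh", input_shape = c(2)) %>%
    #     layer_dense(units = 5,  activation = "tanh") %>%
    #     layer_dense(units = 1,  activation = "linear")
    #
    # The closest ore.neural() equivalent (fully-connected, L-BFGS only),
    # reusing the IRIS ore.frame pushed in Marcos's example:
    fit2 <- ore.neural(Petal.Length ~ Petal.Width + Sepal.Length,
                       data = IRIS,
                       hiddenSizes = c(20, 5),
                       activations = c('tanh', 'tanh', 'linear'))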

  • Bilal
    Bilal Member Posts: 494 Bronze Badge
    edited Jul 17, 2017 7:46AM

    Hi Marcos, many thanks for the response. I went through the presentation slides and found them really useful. Kind regards, Bilal

  • Bilal
    Bilal Member Posts: 494 Bronze Badge
    edited Jul 17, 2017 7:51AM

    I second Christos: these details must be part of the core ORE documentation. @Marcos, thanks for mapping the concepts between Keras and ore.neural. Having ore.neural() with all the advanced features for deep learning would be a real asset for Oracle's data mining stack. Does Oracle plan to provide this functionality in a future release? I am eager to do all our deep learning work using ore.neural. Thanks again. Kind regards, Bilal

  • Bilal
    Bilal Member Posts: 494 Bronze Badge
    edited Jul 17, 2017 7:55AM

    I'm just wondering whether there is any way to provide my own implementations of activation or loss functions and then use these custom functions in ore.neural() while modelling.

  • Sherry Lamonica-Oracle
    Sherry Lamonica-Oracle Posts: 437 Employee
    edited Jul 17, 2017 11:09AM

    The help files for R functions are the first place to seek this type of information. R help files follow a fairly standard outline; you will find most of the following sections in every R help file:

    Title: A one-sentence overview of the function.

    Description: An introduction to the high-level objectives of the function, typically about one paragraph long.

    Usage: A description of the syntax of the function (in other words, how the function is called). This is where you find all the arguments that you can supply to the function, as well as any default values of these arguments.

    Arguments: A description of each argument. Usually this includes a specification of the class (for example, character, numeric, list, and so on). This section is an important one to understand, because arguments are frequently a cause of errors in R.

    Details: Extended details about how the function works, longer descriptions of the various ways to call the function (if applicable), and a longer discussion of the arguments.

    Value: A description of the class of the value returned by the function.

    See also: Links to other relevant functions. In most of the R editors, you can click these links to read the Help files for these functions.

    Examples: Worked examples of real R code that you can paste into your console and run.

    Type the following at the R prompt:

    > help(ore.neural)

    Then scroll down to see the list of possible activation functions.
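    The usual base R shortcuts work as well:

    ?ore.neural          # same as help(ore.neural)
    args(ore.neural)     # quick look at the signature and argument defaults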

    Sherry

  • Bilal
    Bilal Member Posts: 494 Bronze Badge
    edited Jul 17, 2017 3:45PM

    Sherry, thank you. This is really useful; I can see the list of activation functions and other details for ore.neural from the R console. One more thing: can we provide ore.neural() with our own implementations of activation functions?

  • Sherry Lamonica-Oracle
    Sherry Lamonica-Oracle Posts: 437 Employee
    edited Jul 18, 2017 12:17PM

    Hi Bilal,

    Custom activation functions are not currently supported in ore.neural().  I've added an enhancement request to be reviewed for a future ORE release.
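    In the meantime, one possible workaround (a sketch only; it assumes ORE's embedded R execution and that the nnet package is installed on the database server) is to train an arbitrary R network close to the data with ore.tableApply():

    # Run a user-supplied modelling function inside the database's R engine;
    # any R package installed on the server can be used here.
    res <- ore.tableApply(
      ore.push(iris),
      function(dat) {
        library(nnet)
        nnet(Petal.Length ~ Petal.Width + Sepal.Length,
             data = dat, size = 5, linout = TRUE)
      })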

    Thanks,
    Sherry

  • Bilal
    Bilal Member Posts: 494 Bronze Badge
    edited Jul 18, 2017 12:26PM

    Hi Sherry, customisation of ore.neural() based on the requirements of the ML problem at hand would indeed be a great feature. I would like to see these kinds of customisations in future ORE releases. Thanks for taking note of it. Kind regards, Bilal

This discussion has been closed.