Connecting Java code with NVIDIA CUDA

I am having difficulty getting CUDA (C++) code to act as a native function for Java.
First:
I wrote a simple matrix multiplication using CUDA (based on parallel threads).
It runs fine as a standalone executable, and also as a shared library (myCUDAlib.so) when I call it from a C executable.
Since CUDA is C++, I use

extern "C"
{
    int kernelEntry()
    {
        return kernelMatrixMult();
    }
}

to wrap the CUDA entry point kernelMatrixMult() in a C function kernelEntry(), so that the result can be built as a plain C shared library.
It runs fine even for large matrices, e.g. 1024 x 1024.
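For reference, kernelMatrixMult() follows the usual one-thread-per-element pattern. The sketch below is simplified for illustration; the fixed size N, the float type, and the buffer handling are placeholders, not my exact code:

#include <cuda_runtime.h>
#include <stdlib.h>

#define N 1024                      /* matrix dimension -- placeholder */

/* One thread computes one element of C = A * B. */
__global__ void matMulKernel(const float *A, const float *B, float *C, int n)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A[row * n + k] * B[k * n + col];
        C[row * n + col] = sum;
    }
}

/* Host-side launcher: allocate, copy, launch the kernel, copy back. */
int kernelMatrixMult(void)
{
    size_t bytes = (size_t)N * N * sizeof(float);
    float *hA = (float *)malloc(bytes);
    float *hB = (float *)malloc(bytes);
    float *hC = (float *)malloc(bytes);
    /* ... fill hA and hB ... */

    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    matMulKernel<<<grid, block>>>(dA, dB, dC, N);
    cudaDeviceSynchronize();

    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}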
==========================================
Next, I tried to have the C++ code implement a native function for Java (via JNI) that calls the kernel, but this does not work.
==========================================
So I built the C code (which calls the CUDA library) as a shared library instead of an executable, and called it myClib.so.
It implements a function myJNImethod(), which serves as the implementation of my native method for Java. This function simply calls kernelEntry() (mentioned above), which in turn calls kernelMatrixMult() to multiply the two matrices in CUDA.
The aim is to get Java to call the matrix multiplication that is executed by the C++ (CUDA) code.
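The JNI glue in myClib.so looks roughly like the sketch below; the Java class name MatrixTest is only a placeholder for illustration, and the real symbol name has to match the header generated by javac -h (or javah on older JDKs):

#include <jni.h>

// Provided by the CUDA shared library (myCUDAlib.so).
extern "C" int kernelEntry(void);

// Native implementation of a Java method declared as
//     public native int myJNImethod();
// in a class assumed here to be called MatrixTest (placeholder name).
extern "C" JNIEXPORT jint JNICALL
Java_MatrixTest_myJNImethod(JNIEnv *env, jobject obj)
{
    return (jint) kernelEntry();   // launches the CUDA matrix multiplication
}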
For this, I wrote a simple Java program that loads the shared library myClib.so and then calls the native method corresponding to the C function myJNImethod(), which, as said above, calls into the CUDA library.
But this works only for small matrices (up to 128 x 128). When I run this Java + CUDA path with matrices larger than 128 x 128, I get a segmentation fault.
I therefore suspect that there may be some memory issue.
- Does anyone have experience hooking up Java and CUDA via JNI?
- Is there a problem with the way I encapsulate the CUDA code as a C library that also contains the C function implementing the native method?
- Is there a known memory limitation when using JNI with libraries that execute on a multi-threaded GPU?
I appreciate any leads on this.
Cheers