Connecting Java code with NVIDIA CUDA

I am having difficulty getting CUDA (C++) code to act as a native function for Java.
First:
I wrote a simple matrix multiplication in CUDA (based on parallel threads).
It runs well as an executable, and also as a shared library (myCUDAlib.so)
when I call it from a C executable.
Since CUDA is C++, I use

extern "C"
{
    int kernelEntry()
    {
        return kernelMatrixMult();
    }
}

to wrap the CUDA kernel call kernelMatrixMult() in a C function kernelEntry(), so this becomes my shared C library.
It runs well even for large matrices, such as 1024 x 1024.
==========================================
Next, I tried to have C++ code implement a native function for Java (JNI) which calls the kernel, but this does not work.
==========================================
So I built the C code (which calls the CUDA library) as a shared library (myClib.so) instead of an executable.
It implements a function myJNImethod(),
which serves as the implementation of my native method for Java. This function simply calls kernelEntry()
(mentioned above), which in turn calls kernelMatrixMult(),
the CUDA code that multiplies the two matrices.
The aim is for Java to invoke the matrix multiplication executed by the C++ (CUDA) code.
For this, I wrote a simple Java program that loads the shared library myClib.so
and then calls the native method corresponding to the C function myJNImethod()
implemented in that library, which, as described above, calls the CUDA library.
But this works only for small matrices (up to 128 x 128). When I run this Java + CUDA path with matrices larger than 128 x 128, I get a segmentation fault.
I therefore suspect there may be some memory issue.
- Does anyone have experience hooking up Java and CUDA via JNI?
- Is there a problem with the way I encapsulate the CUDA code as a C library that also contains the C function implementing the native method?
- Is there a known memory limitation when using JNI with libraries that execute on a multi-threaded GPU?
I appreciate any leads on this.
Cheers