DAX offloading isn't offloading cores?

Hello,
According to http://www.oracle.com/technetwork/server-storage/sun-sparc-enterprise/documentation/sparc-t7-m7-server-architecture-2702… : "These engines can process 32 independent data streams, offloading the processor cores to do other work."
Let's pbind "yes > /dev/null" onto all the threads of a single core and then launch vector_in_range() on one of those threads. A single execution of vector_in_range() takes about 80 ms (zone memory has still not been increased, case ID: 497386-1217697831), so we run it in a loop. What we get is CPU time sharing:
PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/LWP
5334 dglushe* 99 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0 60 68K 0 yes/1
5327 dglushe* 99 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0 60 68K 0 yes/1
5336 dglushe* 99 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0 60 68K 0 yes/1
5330 dglushe* 99 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0 60 68K 0 yes/1
5332 dglushe* 99 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0 60 68K 0 yes/1
5340 dglushe* 99 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0 60 68K 0 yes/1
5338 dglushe* 99 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0 60 68K 0 yes/1
5345 dglushe* 46 14 0.1 0.0 0.0 0.0 0.0 39 0 48 .1M 0 dax-in-range/1
5342 dglushe* 39 0.4 0.0 0.0 0.0 0.0 0.0 61 0 49 24K 0 yes/1
Most of the user time dax-in-range spends in the following stack:
libdax.so.1`dax_read_results+0x1c4
libdax_query.so.1`dax_query_execute+0x160
libdax_query.so.1`dax_scan+0xdb4
vector.so`dax_scan+0x12c
vector.so`vectorScanStream+0x1d8
vector.so`vectorFilter+0x418
vector.so`vectorInRange+0x2c
vector.so`vector_in_range+0x24
dax-in-range`main+0x1d0
dax-in-range`_start+0x108
Is it vector.so that uses DAX inefficiently, or is the offloading statement incorrect? vector_in_range() steals time from "yes > /dev/null".
Thank you.
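For context, the stack above suggests that vector.so drives DAX synchronously, roughly like the sketch below. This is illustrative only: dax_scan_range()'s signature is inferred from the dax_scan_range_post() declaration in dax.h and my memory of the libdax man pages, the op constant is a placeholder, vector setup is omitted, and this builds only on Solaris/SPARC with dax.h available.

```c
#include <dax.h>   /* Solaris libdax; SPARC M7/T7 only */

/* Hypothetical sketch of a synchronous in-range scan. */
void scan_sync(dax_context_t *ctx, dax_vec_t *src, dax_vec_t *dst,
               dax_int_t *lo, dax_int_t *hi)
{
    /* A synchronous dax_scan_range() submits the scan and then
     * waits for the result before returning.  That wait is what
     * dax_read_results in the stack above appears to be doing,
     * and it is why the calling LWP keeps consuming CPU until
     * the DAX engine finishes. */
    dax_result_t r = dax_scan_range(ctx, 0, src, dst,
        DAX_EQ /* placeholder; see dax.h for the range op */,
        lo, hi);
    (void) r;
}
```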
Best Answer
-
Oh, I see you shared dax.h. It seems that vector.so uses synchronous DAX calls, but dax.h states that there are asynchronous calls too:
/*
 * NAME: dax_post - family of functions that post asynchronous dax requests
 * SYNOPSIS:
 */
..
dax_status_t dax_scan_range_post(dax_queue_t *queue, uint64_t flags,
    dax_vec_t *src, dax_vec_t *dst, dax_compare_t op,
    dax_int_t *lower, dax_int_t *upper, void *udata);
No more questions, thank you!
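The asynchronous pattern would look roughly like the sketch below. Only dax_scan_range_post() is quoted from dax.h in this thread; dax_queue_create(), dax_poll(), and dax_queue_destroy() are recalled from the libdax man pages and their exact signatures may differ, and the op constant is a placeholder. It builds only on Solaris/SPARC with dax.h available.

```c
#include <dax.h>   /* Solaris libdax; SPARC M7/T7 only */

/* Hypothetical sketch: post the scan, do other work, then reap. */
void scan_async(dax_context_t *ctx, dax_vec_t *src, dax_vec_t *dst,
                dax_int_t *lo, dax_int_t *hi)
{
    dax_queue_t *q;
    dax_poll_t   done[1];

    if (dax_queue_create(ctx, 1, &q) != DAX_SUCCESS)
        return;

    /* _post returns as soon as the request is queued to DAX. */
    if (dax_scan_range_post(q, 0, src, dst,
            DAX_EQ /* placeholder; see dax.h for the range op */,
            lo, hi, NULL) == DAX_SUCCESS) {
        /* ... the core is free to do other work here ... */

        /* Reap the completion. */
        (void) dax_poll(q, done, 1, -1);
    }
    (void) dax_queue_destroy(q);
}
```

Whether the strand still burns CPU then depends on how dax_poll() waits; if it also spins, the benefit comes from interleaving real work between the post and the poll.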
Answers
-
Hi,
Your Zone now has 16 GB of RAM. Please let me know if that is sufficient.
-Angelo
-
Thank you. The memory limit is now sufficient.
But CPU offloading still shows that while DAX is doing its work, the CPU core is busy too.
With the new memory limit, vector_in_range() takes 0.8 seconds to complete. Let's run it in a loop on the same thread ID as "yes > /dev/null" (using pbind). Again, we see that yes shares CPU time with dax-in-range:
PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/LWP
19860 dglushe* 99 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0 30 34K 0 yes/1
19868 dglushe* 99 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0 30 34K 0 yes/1
19864 dglushe* 99 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0 30 34K 0 yes/1
19866 dglushe* 99 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0 30 34K 0 yes/1
19857 dglushe* 99 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0 30 34K 0 yes/1
19862 dglushe* 99 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0 30 34K 0 yes/1
19870 dglushe* 99 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0 30 34K 0 yes/1
19880 dglushe* 50 15 0.1 0.0 0.0 0.0 0.0 35 1 23 .1M 0 dax-in-range/1
19850 dglushe* 34 0.3 0.0 0.0 0.0 0.0 0.0 65 0 24 10K 0 yes/1