> During the tests I found that results extraction takes more time than processing
Reading a text file and converting it to a vector using the DAX library takes additional time because of copying overhead. We got a request to implement an API where the user has already allocated a buffer large enough to hold the data being read, so the DAX library can read directly into that buffer. After we implement that API, the time difference between a read using fscanf and one using the DAX library (which also does fscanf internally) would go away. I agree that reading the data and converting it from its ASCII representation takes a lot of time, much more than an individual query. For this reason, we have provided an API to read binary data. Even so, the time to read the entire file would be much higher than that of a single query. Our expectation is that in a real application, as opposed to a sample application, the data, once read, would be scanned many times.
> Is it a consequence of the vector_extract() implementation in vector.so, or does DAX need to do hard work to extract already-processed data?
Actually, vector_in_range() does most of the hard work and generates the bit vector that records which elements were in range. vector_extract() was written in a way that favors the case where a lot of elements (say 10%) are in range, while remaining not very inefficient when very few elements are returned.
> And the second question: according to "busstat -w dax" output, only 8 of 32 DAX engines are used while processing a request. Is it possible to utilize more than 8 DAX engines for one request?
For single-threaded programs, you will not see all the DAX engines in use; however, if you run multiple copies of this sample program in a loop (perhaps choosing a different range each time), you will see all DAX engines being utilized.
Hope this answered your question. If you have any suggestions/feedback regarding the APIs, please let us know.
Thank you for the fast reply.
By slow extraction I meant that vector_in_range() performs faster than vector_extract(). For example:
9.751211 seconds to load 50000000 integers from text file using DAX library
8.347688 seconds to load 50000000 integers from text file using fscanf()
0.045615 seconds to find numbers between -1000000000 and 1000000000 using DAX. There are 23283015 such numbers
0.318128 seconds to find numbers between -1000000000 and 1000000000 using C-cycle. There are 23283015 such numbers
0.107902 seconds to display results using DAX.
2.980213 seconds to display results from array.
"display results using DAX" in this case doing only vector_extract() without any additional things like printf(). So, vector_in_range() takes 0.046 seconds to find 47% of data in range, but vector_extract() on results of vector_in_range() takes 0.108 seconds. More than twice as much. What I want to understand is if it ineffectively written vector_extract() performs slow or DAX doing it slow by design.