Welcome to the Oracle Analytics Community


User743400-Oracle Rank 3 - Community Apprentice

Comments

  • Both the design and runtime data are stored in a file-based relational database called Java DB, in the datasync directory. The Data Sync UI is the only way to access this data. Consider setting up email notifications (use 2.6.1), which will send out some stats about the run at the end of the job.
  • DATE columns do contain the timestamp part up to seconds. DATE does not store milliseconds or timezone information. Having said that, incremental data loads using Data Sync should not have any limits when it comes to the number of rows. It does store the last refresh timestamp in a date field, so it's accurate to the second of the… (a small second-precision sketch follows after these comments)
  • All the metadata for Data Sync is stored in a file-based relational database called JAVADB.
  • There are two components in Data Sync - a server part and a client part. datasync.bat/.sh is a wrapper that brings both components up together. startserver.bat/.sh brings up only the backend - this is the part that runs jobs, maintains the schedules, etc. datasyncClient.bat brings up only the client component. This…
  • Upgrading to a newer version requires the password to the old environment to be re-entered. If you don't have the password, consider using resetdatasyncpassword.bat. Run it from a command prompt, and you will be required to answer at least two data source passwords as defined in the Data Sync repository to set a new password.…
  • You don't have to regenerate; this should be automatic runtime behavior. I just tried it with 2.6. It works. Sorry, I should have mentioned this before... Go to Views->System Properties. Set the Server Log Level to FINEST. Rerun the job. This information is printed at the FINE log level. Inspect the log file. You should see…
  • Jobs->Runs->Tasks subtab. Choose the row you are interested in. Click on Details. Choose the row that reads INSERT_UPDATE_DATACOPY. Double-click on Additional Info/Log. There is a log file name published like so: D:\datasync\log\ReadFromOracle-Oracle12c.25944081\CR_AB_DATA_FROM_OCS-AB_OCS_TARGET.20190430.1020.log That log…
  • It's a not-so-common use case, so it's probably not documented clearly. But this is the precise use case for the prune time concept. Hope you find this functionality useful.
  • You cannot update it from outside. Manually, yes, you can go to Connections -> Refresh Dates and update the value. But this will not be sustainable, as you will be doing it prior to every run. Best to use the prune time concept at the job level. This will apply to all the tables being populated in a project.
  • That's easy - to compensate, you have two ways (in general). 1. If your source DB's timezone is different from that of the machine where DS runs, then be sure to specify timezone information for the connection - this way DS will automatically compensate for the time difference between the timezones. 2. If you want to manually… (a timezone sketch follows after these comments)
  • No, it's not possible. The metadata and the runtime data are stored in a file-based relational database construct called JavaDB, and it's not possible to query this database from outside. However, if you want to use it anywhere in the data load routines, you could create a parameter (under the Parameters tab). Choose Timestamp as…
  • Unfortunately there is no direct way of figuring it out. Data Sync uses merge statements to do the insert or update in a single SQL statement, and from the rows affected (reported in the log file), one would not be able to say which is which. Data Sync uses two writers to push the data, and hence the two counts - they do not… (a MERGE row-count sketch follows after these comments)
  • Files are not read incrementally. All the files in the specified location are read. You should purge the old files and keep only the new ones for the load. You can set "Delete source file(s) upon successful load (true/false)" to true to delete files automatically. It's possible that the new ones were loaded first and…
  • No, not really. As long as the join is an acceptable SQL part, you don't have to. You need double quotes if the table name uses a lower-case naming convention (for example "mytable") or has spaces in the name (for example "MY TABLE").
  • The latest version, 2.6, is live now. Have the customer download and use it: https://www.oracle.com/technetwork/middleware/oac/downloads/index.html Install into a brand-new directory. When it is started for the first time, choose the 'Copy an existing environment' option and point to the current install directory. The process will copy…
  • You do not need the $$ prefix. Just create a variable called INITIAL_EXTRACT_DATE with custom formatting - in this case use the format MM-dd-yyyy HH:mm:ss (Java notation - refer to https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html). While using it in a query, refer to the parameter as select *,… (a SimpleDateFormat sketch follows after these comments)
  • That's the best way to organize. You can also chain the jobs to run one after the other. Refer to the on-demand-etl.xml file, where you can trigger a job based on a file signal. At the end of every job, in the log\jobsignal directory, you will find <Job_name>_StartSignal.txt, <Job_name>_CompletedSignal.txt, or… (a signal-file polling sketch follows after these comments)
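
Sketches (referenced in the comments above)

Since a DATE field keeps only whole seconds, any millisecond part of a source timestamp is lost when the last refresh date is recorded. A minimal illustration of that truncation in Java (the values are made up; this is not Data Sync internals):

    import java.sql.Timestamp;
    import java.time.Instant;
    import java.time.temporal.ChronoUnit;

    public class SecondPrecision {
        public static void main(String[] args) {
            Instant now = Instant.now();
            // What a DATE column can hold: the same instant truncated to whole seconds
            Instant storedInDate = now.truncatedTo(ChronoUnit.SECONDS);
            System.out.println("Source timestamp : " + Timestamp.from(now));
            System.out.println("Stored in DATE   : " + Timestamp.from(storedInDate));
        }
    }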
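
Here is a rough illustration of the kind of offset adjustment the connection-level timezone setting compensates for, assuming the Data Sync machine and the source DB sit in different zones (the zones and the timestamp are hypothetical):

    import java.time.ZoneId;
    import java.time.ZonedDateTime;
    import java.time.format.DateTimeFormatter;

    public class TimezoneCompensation {
        public static void main(String[] args) {
            // Last refresh recorded on the machine where DS runs (hypothetical value and zone)
            ZonedDateTime lastRefreshLocal =
                    ZonedDateTime.of(2019, 4, 30, 10, 20, 0, 0, ZoneId.of("America/New_York"));

            // The same instant expressed in the source database's timezone, so the
            // incremental filter compares like-for-like timestamps
            ZonedDateTime lastRefreshInSourceTz =
                    lastRefreshLocal.withZoneSameInstant(ZoneId.of("UTC"));

            DateTimeFormatter fmt = DateTimeFormatter.ofPattern("MM-dd-yyyy HH:mm:ss");
            System.out.println("DS machine time : " + fmt.format(lastRefreshLocal));
            System.out.println("Source DB time  : " + fmt.format(lastRefreshInSourceTz));
        }
    }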
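
On the insert-versus-update counts: a MERGE reports only the total rows affected, so the split cannot be recovered from the statement itself. A small JDBC sketch of why (connection details, table and column names are hypothetical; this is not how Data Sync is implemented internally):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MergeRowCount {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; requires an Oracle JDBC driver on the classpath
            try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//host:1521/service", "user", "password");
                 Statement stmt = conn.createStatement()) {
                int rowsAffected = stmt.executeUpdate(
                    "MERGE INTO target_table t " +
                    "USING staging_table s ON (t.id = s.id) " +
                    "WHEN MATCHED THEN UPDATE SET t.val = s.val " +
                    "WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val)");
                // Only the combined count comes back; there is no per-branch insert/update breakdown
                System.out.println("Rows merged (inserts + updates): " + rowsAffected);
            }
        }
    }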
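
The MM-dd-yyyy HH:mm:ss pattern mentioned for the INITIAL_EXTRACT_DATE parameter follows Java's SimpleDateFormat notation. A quick check of what that pattern produces (the class name here is just for illustration):

    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class InitialExtractDateFormat {
        public static void main(String[] args) {
            // Month-day-year with 24-hour time, e.g. 04-30-2019 10:20:00
            SimpleDateFormat fmt = new SimpleDateFormat("MM-dd-yyyy HH:mm:ss");
            System.out.println(fmt.format(new Date()));
        }
    }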
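
For chaining jobs off the signal files, one simple approach is to poll the log\jobsignal directory for the completed-signal file and then kick off the next step. A rough sketch, assuming a hypothetical install path and job name (the actual file-signal trigger mechanism is what on-demand-etl.xml describes):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class WaitForJobSignal {
        public static void main(String[] args) throws InterruptedException {
            // Placeholder path and job name; substitute your Data Sync install and job
            Path signal = Paths.get("D:\\datasync\\log\\jobsignal\\MY_JOB_CompletedSignal.txt");

            // Poll until the completed-signal file appears, then hand off to the next step
            while (!Files.exists(signal)) {
                Thread.sleep(30_000); // check every 30 seconds
            }
            System.out.println("Job finished - trigger the downstream load here");
        }
    }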