
Device Showcase

Pabloc-Oracle · May 19 2015 — edited May 21 2015

The Device Showcase is a set of three booths at the JavaHub that will display real-world cases of Java technology in areas such as automation, embedded development, IoT, and the cloud.

Details on the projects that will be demonstrated in the Device Showcase at JavaHub:


Java TV Digital

Digital Interactivity: Java running on TV with Ginga-J

UX4

www.ux4.tv.br

The Ao Cubo application, developed by UX4 for TV Aparecida in partnership with the production company 8 milímetros, grew out of the need to connect the viewers of a new show, giving them access to the daily program's information and content while also letting them interact while the show is on the air. To that end, the application includes a poll on topics debated in the weekly programs: the viewer can vote in the poll and see the final tally at the end, along with information about the host and the guests being interviewed.


Team Members:


Bruno Gualda, Thiago Vespa


Java and IoT

Surfing the Internet of Things

Globalcode

www.globalcode.com.br

A wooden laptop with a touchscreen, simultaneously running an Intel Edison and a Raspberry Pi with a Java microkernel that allows tasks to be switched in real time through the one-button-many-tasks concept, which simplifies operating multi-platform, Java-based embedded systems. The Edison Pi is built from marine plywood, a material that is pleasant to touch, smells good, and has a unique retro look; together with a working cigarette lighter and a multi-battery system, it makes a one-of-a-kind piece of Java technology.

Also meet the Galileo Robot, a Bluetooth robot running Java SE on an Intel Galileo that communicates through gestures with an Intel Edison. We will show how you can use ZigBee with Java in industrial and commercial projects using the newest Brazilian IoT board, the IoT Surfboard. And we will also show how to integrate the Intel RealSense camera with MQTT via Eclipse Paho, so you can use gestures to command things on the internet, remotely!


Team Members:


Yara Senger, Vinicius Senger


Java Embedded

Developing connected, intelligent products with Embedded Java

V2COM

www.v2com.com.br

Learn how V2COM builds connected, intelligent products with the Java language, from the device all the way to the datacenter. In this demonstration we show one of our products: a submeter for monitoring residential and commercial energy consumption.


Team Member:


Leonardo Lima


Java RFID

Embedded RFID Middleware: IoT in everything

Eficid

www.eficid.com

A presentation of the embedded RFID middleware, developed on the Java platform for deployment on several controllers: Intel Edison, Intel Galileo, BeagleBone Black, and Raspberry Pi.

We developed an integration board that allows a long-range 900 MHz RFID reader, operated via serial commands, to be connected. The project uses the MTI RU-861 reader (http://www.mtigroup.com/_english/02_products/03products.php?MainID=12&SID=49&ID=170). We extended the sensing range, making it possible to identify any object in real time (clothing, medication, retail items, people) by applying a passive RFID tag.

For JavaOne Latam, we will present a people portal with a welcome screen: the person's data is stored in a web system built on the Java platform, the presentation uses a JavaFX application, and everything communicates over the MQTT protocol.


Team Member:


Luis Avilla


Java IoT Cloud

Tweet4Beer! A draft beer tap controlled by Java, Cloud, and IoT!

Oracle

www.oracle.com

Be warned! In this session, attendees over 18 may get drunk. We will demonstrate, end to end, how to build your own Java-controlled, Internet-connected draft beer tap! We will show a JavaFX visual application using Java SE 8 and running on a Raspberry Pi, plus a Java EE application that processes the Twitter data and communicates with the Raspberry Pi.


Team Member:


Marco Maciel


Schedule

Tuesday, June 23rd - 11 am - 5:30 pm

| Booth | Project |
| Booth A | Tweet 4 Beer |
| Booth B | TBD |
| Booth C | TBD |

Wednesday, June 24th - 11 am - 7:30 pm

| Booth | Project |
| Booth A | Java TV Digital |
| Booth B | TBD |
| Booth C | TBD |

Thursday, June 25th - 9:30 am - 6:00 pm

| Booth | Project |
| Booth A | Tweet 4 Beer |
| Booth B | TBD |
| Booth C | TBD |

Comments

mathguy

Forget everything else and focus on the "dynamic" part of "dynamic pivoting".  What is your plan for doing that in a single SQL query? As you may or may not know, a "single SQL query" is considered static - in direct opposition to "dynamic".

User_OMEF8

mathguy wrote:

Forget everything else and focus on the "dynamic" part of "dynamic pivoting". What is your plan for doing that in a single SQL query? As you may or may not know, a "single SQL query" is considered static - in direct opposition to "dynamic".

I might have used the wrong terminology for the Oracle world, then. My end goal would be to have this available through a web app where the only information the user enters is a date range. The date range would then be "inserted" into the query to retrieve data. For instance, if the user entered Start Date "2019-08-01" and End Date "2019-08-07", it would create "2019-08-01", "2019-08-02", "2019-08-03", "2019-08-04", "2019-08-05", "2019-08-06", "2019-08-07" as the column headers (along with the Store and Items), but I am trying to see if I can even get the query right.

Frank Kulash

Hi,

Whenever you have a question, please post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) for all the tables involved, and the exact results you want from that data, so that the people who want to help you can re-create the problem and test their ideas.

Explain, using specific examples, how you get those results from that data.

Always say what version of Oracle you're using (e.g. 12.2.0.2.0).

See the forum FAQ.

If you want the number of columns in the output to depend on what the user enters at run-time, then you need Dynamic SQL.  It's not very complicated, as dynamic SQL goes, but it's still something you might want to avoid.  Your front end may have tools for making this easier.  What front end are you using?

One way to avoid dynamic SQL is to put the output in a huge string column that is formatted to look like a dynamic number of columns, so a person reading the output thinks it is separate columns.

Another approach is to allow a maximum number of output columns.  For example, you could set a limit of 14 columns, so the user could specify a date range of up to two weeks, and get a separate column of output for each calendar day.  If the user specifies fewer than 14 days, then the output would have some empty columns at the end.

Do you want to try either of these static SQL approaches?
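For illustration only, here is a rough, untested sketch of what the dynamic SQL route could look like: a PL/SQL block builds the PIVOT IN-list from the two dates and opens a ref cursor. The view name sales_view and its columns store, item, and saledate are made-up placeholders, not anything from this thread.

DECLARE
    -- These two dates would come from the front end; hard-coded here for illustration.
    p_start_date  DATE := DATE '2019-08-01';
    p_end_date    DATE := DATE '2019-08-07';
    l_in_list     VARCHAR2 (4000);
    l_sql         VARCHAR2 (4000);
    l_cursor      SYS_REFCURSOR;
BEGIN
    -- Build one IN-list entry per day in the range, e.g. DATE '2019-08-01' AS d_2019_08_01
    FOR i IN 0 .. p_end_date - p_start_date LOOP
        l_in_list := l_in_list
                  || CASE WHEN i > 0 THEN ', ' END
                  || 'DATE ''' || TO_CHAR (p_start_date + i, 'yyyy-mm-dd') || ''''
                  || ' AS d_'  || REPLACE (TO_CHAR (p_start_date + i, 'yyyy-mm-dd'), '-', '_');
    END LOOP;

    -- sales_view (store, item, saledate) is a hypothetical placeholder for the real data
    l_sql := 'SELECT * FROM sales_view'
          || ' PIVOT (COUNT(*) FOR saledate IN (' || l_in_list || '))'
          || ' ORDER BY store, item';

    OPEN l_cursor FOR l_sql;   -- the front end would fetch the rows from this cursor
END;
/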

976563 wrote:

...

Here is my attempt SQL query...

select * from (
    select sn.storenamecolumn, i.itemcolumn, trunc(d.solddatecolumn)
    from itemtable i
    join storetable s on i.id = s.id
    join storenametable sn on s.storeid = sn.storename
    join itemdatetable d on i.itemid = d.id
    where i.itemcolumn in ('PEN', 'PENCIL', 'ERASER')
    and trunc(d.solddatecolumn) >= to_date('2019-08-01', 'yyyy-mm-dd') and trunc(d.solddatecolumn) < to_date('2019-08-04', 'yyyy-mm-dd')
)
pivot
(
count(*)
for trunc(solddatecolumn) in ('2019-08-01', '2019-08-02', '2019-08-03') <============ error missing IN keyword
)
where storenamecolumn in ('DOWNTOWN', 'SUBURB')
order by storenamecolumn;

I keep getting the error above and I believe it has to do with the trunc() method. If I remove the trunc() method from both the "for trunc()" and the SELECT Statement, then it will run, but the results are not right because I am only looking for a date and not the entire "timestamp". Any suggestions on how I can achieve my desired results? Thanks.

In the PIVOT clause, you can't use a function (like TRUNC) after the FOR keyword; only a column name is allowed.  You can use TRUNC in the sub-query.

Also, don't try to compare TIMESTAMPs (like solddatecolumn) or DATEs (like TRUNC (solddatecolumn)) to strings (like '2019-08-01').  You can compare DATEs to DATEs like this:

FOR  truncsolddatecolumn IN ( DATE '2019-08-01'  AS d_2019_08_01
                            , DATE '2019-08-02'  AS d_2019_08_02
                            , DATE '2019-08-03'  AS d_2019_08_03
                            )

where truncsolddatecolumn is defined in the sub-query as TRUNC (solddatecolumn).

[EDIT: I just noticed that you are using TRUNC in the sub-query already.  That's good; give that column an alias; then you can use that alias in a PIVOT clause, after the FOR keyword.]
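For example, an untested sketch against the placeholder table and column names posted above, just to show where the alias goes:

select *
from  (
        select  sn.storenamecolumn
        ,       i.itemcolumn
        ,       trunc (d.solddatecolumn)  as truncsolddatecolumn   -- alias the TRUNC here
        from    itemtable       i
        join    storetable      s   on  i.id      = s.id
        join    storenametable  sn  on  s.storeid = sn.storename
        join    itemdatetable   d   on  i.itemid  = d.id
        where   i.itemcolumn in ('PEN', 'PENCIL', 'ERASER')
        and     trunc (d.solddatecolumn) >= to_date ('2019-08-01', 'yyyy-mm-dd')
        and     trunc (d.solddatecolumn) <  to_date ('2019-08-04', 'yyyy-mm-dd')
      )
pivot (
        count (*)
        for truncsolddatecolumn in ( DATE '2019-08-01'  as d_2019_08_01
                                   , DATE '2019-08-02'  as d_2019_08_02
                                   , DATE '2019-08-03'  as d_2019_08_03
                                   )
      )
where     storenamecolumn in ('DOWNTOWN', 'SUBURB')
order by  storenamecolumn;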

Also, is it possible to calculate the total average for each day and put it at the bottom of each Date column? I edited the table above.

GROUP BY ROLLUP (or GROUP BY GROUPING SETS) is good for getting sub-totals like that.

You can have a COUNT on some rows and an average on other rows; they're both numbers.

If you really want text on some rows (e.g. '2.67 (AVERAGE DATE DIFFERENCE)' ) then you'd need to make that column a string.
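For instance, a bare-bones sketch of ROLLUP subtotals; the sales table and its days_to_sell column are hypothetical here, only to show the shape of the query:

-- Hypothetical table sales (store, item, days_to_sell), for illustration only
SELECT    store
,         item
,         COUNT (*)           AS num_sales
,         AVG (days_to_sell)  AS avg_days
FROM      sales
GROUP BY  ROLLUP (store, item)   -- adds a subtotal row per store and a grand-total row
ORDER BY  store, item;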

User_OMEF8

Frank Kulash wrote:

If you want the number of columns in the output to depend on what the user enters at run-time, then you need Dynamic SQL. It's not very complicated, as dynamic SQL goes, but it's still something you might want to avoid. Your front end may have tools for making this easier. What front end are you using?

One way to avoid dynamic SQL is to put the output in a huge string column that is formatted to look like a dynamic number of columns. so a person reading the output thinks it is separate columns.

Another approach is to allow a maximum number of output columns. For example, you could set a limit of 14 columns, so the user could specify a date range of up to two weeks, and get a separate column of output for each calendar day. If the user specifies fewer than 14 days, then the output would have some empty columns at the end.

Do you want to try either of these static SQL approaches?

Initially, I wanted to build them an ASP page where they could input the date range they would like.  After I discussed it with them, we agreed to run it on a weekly basis, so I can spool the file through a scheduled task.

Frank Kulash wrote:

976563 wrote:

...

Here is my attempt SQL query...

select * from (

select sn.storenamecolumn, i.itemcolumn, trunc(d.solddatecolumn)

from itemtable i

join storetable s on i.id = s.id

join storenametable sn on s.storeid = sn.storename

join itemdatetable d on i.itemid = d.id

where i.itemcolumn in ('PEN', 'PENCIL', 'ERASER')

and trunc(d.solddatecolumn) >= to_date('2019-08-01', 'yyyy-mm-dd') and trunc(d.solddatecolumn) < to_date('2019-08-04', 'yyyy-mm-dd')

)

pivot

(

count(*)

for trunc(solddatecolumn) in ('2019-08-01', '2019-08-02', '2019-08-03') <============ error missing IN keyword

)

where storenamecolumn in ('DOWNTOWN', 'SUBURB')

order by storenamecolumn;

I keep getting the error above and I believe it has to do with the trunc() method. If I remove the trunc() method from both the "for trunc()" and the SELECT Statement, then it will run, but the results are not right because I am only looking for a date and not the entire "timestamp". Any suggestions on how I can achieve my desired results? Thanks.

In the PIVOT clause, you can't use a function (like TRUNC) after the FOR keyword; only a column name is allowed. You can use TRUNC in the sub-query.

Also, don't try to compare TIMESTAMPs (like solddatecolumn) or DATEs (like TRUNC (solddatecolumn)) to strings (like '2019-08-01'). You can compare DATEs to DATEs like this:

FOR truncsolddatecolumn IN ( DATE '2019-08-01' AS d_2019_08_01

, DATE '2019-08-02' AS d_2019_08_02

, DATE '2019-08-03' AS d_2019_08_03

)

where truncsolddatecolumn is defined in the sub-query as TRUNC (solddatecolumn).

[EDIT: I just noticed that you are using TRUNC in the sub-query already. That's good; give that column an alias; then you can use that alias in a PIVOT clause, after the FOR keyword.]

Thanks for the alias suggestion.  That part did the trick for the query to run.  I see data in my results now, but when I try to compute the statistics, it keeps giving me 0.  My updated query is below.  Any idea why it keeps giving me 0?

select * from (
     select sn.storenamecolumn, i.itemcolumn, trunc(d.solddatecolumn) - trunc(i.arrivaldatecolumn) as truncdifference
     from itemtable i
     join storetable s on i.id = s.id
     join storenametable sn on s.storeid = sn.storename
     join itemdatetable d on i.itemid = d.id
     where i.itemcolumn in ('PEN', 'PENCIL', 'ERASER')
     and trunc(d.solddatecolumn) >= to_date('2019-08-01', 'yyyy-mm-dd') and trunc(d.solddatecolumn) < to_date('2019-08-04', 'yyyy-mm-dd')
     )
     pivot
     (
     count(*)
     for truncdifference in (DATE '2019-08-01' as d_2019_08_01,
                             DATE '2019-08-02' as d_2019_08_02,
                             DATE '2019-08-03' as d_2019_08_03)
     )
where storenamecolumn in ('DOWNTOWN', 'SUBURB')
order by storenamecolumn;

Frank Kulash wrote:

Also, is it possible to calculate the total average for each day and put it at the bottom of each Date column? I edited the table above.

GROUP BY ROLLUP (or GROUP BY GROUPING SETS) is good for getting sub-totals like that.

You can have a COUNT on some rows and an average on other rows; they're both numbers.

If you really want text on some rows (e.g. '2.67 (AVERAGE DATE DIFFERENCE)' ) then you'd need to make that column a string.

Thanks for the rollup tip.  I did not even know about this function.  I am currently reading up on it right now on an Oracle page (https://www.oracletutorial.com/oracle-basics/oracle-rollup/ ).  I haven't even been able to try this out yet though, since I am still stuck on the previous issue.  I will only want it as a number value.  I only added the (AVERAGE DATE DIFFERENCE) to depict what I wanted the value to represent.

Frank Kulash

Hi,

976563 wrote:

...

I see data in my results now, but when I try to compute the statistics, it keeps giving me 0. My updated query is below. Any idea why it keeps giving me 0?

select * from (

select sn.storenamecolumn, i.itemcolumn, trunc(d.solddatecolumn) - trunc(i.arrivaldatecolumn) as truncdifference

from itemtable i

join storetable s on i.id = s.id

join storenametable sn on s.storeid = sn.storename

join itemdatetable d on i.itemid = d.id

where i.itemcolumn in ('PEN', 'PENCIL', 'ERASER')

and trunc(d.solddatecolumn) >= to_date('2019-08-01', 'yyyy-mm-dd') and trunc(d.solddatecolumn) < to_date('2019-08-04', 'yyyy-mm-dd')

)

pivot

(

count(*)

for truncdifference in (DATE '2019-08-01' as d_2019_08_01,

DATE '2019-08-02' as d_2019_08_02,

DATE '2019-08-03' as d_2019_08_03)

)

where storenamecolumn in ('DOWNTOWN', 'SUBURB')

order by storenamecolumn;

...

Once again, post a little sample data and the exact results you want to see from that data.

You're defining truncdifference as a NUMBER (the minus operator,  - , returns a NUMBER), then comparing that NUMBER to DATEs.  Display the results of your in-line view without pivoting them.  Notice that truncdifference has values like 1 or 3, not DATE '2019-08-01' or DATE '2019-08-03'.
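For example (untested, and using your placeholder names), run the in-line view by itself and look at what is actually in truncdifference:

select  sn.storenamecolumn
,       i.itemcolumn
,       trunc (d.solddatecolumn) - trunc (i.arrivaldatecolumn)  as truncdifference
from    itemtable       i
join    storetable      s   on  i.id      = s.id
join    storenametable  sn  on  s.storeid = sn.storename
join    itemdatetable   d   on  i.itemid  = d.id
where   i.itemcolumn in ('PEN', 'PENCIL', 'ERASER');
-- truncdifference holds small NUMBERs (e.g. 1 or 3), so a pivot on it would need
-- numeric values in the IN list, e.g.  for truncdifference in (1 as day_1, 2 as day_2)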

Frank Kulash

Hi,

By the way:

976563 wrote:

...

and trunc(d.solddatecolumn) >= to_date('2019-08-01', 'yyyy-mm-dd') and trunc(d.solddatecolumn) < to_date('2019-08-04', 'yyyy-mm-dd')

...

It's more efficient (sometimes much, much more efficient) to use raw columns in conditions, like this:

and  d.solddatecolumn >= to_date('2019-08-01', 'yyyy-mm-dd')

and  d.solddatecolumn  < to_date('2019-08-04', 'yyyy-mm-dd')

The main reason is that it allows the optimizer to use an index on solddatecolumn.

It's also faster because it avoids calling TRUNC twice for every row in the table.
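To make that concrete: assuming a plain index like the one below exists (or could be created), the raw-column conditions let the optimizer range-scan it, whereas with TRUNC wrapped around the column only a function-based index would help.

-- Hypothetical index, shown only to illustrate why the raw-column predicate matters
CREATE INDEX itemdatetable_solddate_ix ON itemdatetable (solddatecolumn);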

User_OMEF8

Frank Kulash wrote:

Hi,

By the way:

976563 wrote:

...

and trunc(d.solddatecolumn) >= to_date('2019-08-01', 'yyyy-mm-dd') and trunc(d.solddatecolumn) < to_date('2019-08-04', 'yyyy-mm-dd')

...

It's more efficient (sometimes much, much more efficient) to use raw columns in conditions, like this:

and d.solddatecolumn >= to_date('2019-08-01', 'yyyy-mm-dd')

and d.solddatecolumn < to_date('2019-08-04', 'yyyy-mm-dd')

The main reason is that it allows the optimizer to use an index on solddatecolumn.

It's also faster because it avoids calling TRUNC twice for every row in the table.

Thanks for the tip again!

I apologize, but I am not sure what you mean by sample data.  Is it from each table?  The output I would like is in my original post.  I see what you mean about comparing Numbers to Dates.  I think the difficulty with this query is the "Date Sold - Date Arrived" calculation, and applying that Number value to each "Item" and "Store", since I want to display the results on a per-date basis.  I did try the ROLLUP and it works when I do not use a PIVOT.  I get the results, but I cannot seem to get it to display for each day within my specified date range.  I am still trying to work through that.

Edit:  The ROLLUP gives me the sum, but I am actually looking for the average.

Frank Kulash

Hi,

976563 wrote:

Frank Kulash wrote:

Hi,

By the way:

976563 wrote:

...

and trunc(d.solddatecolumn) >= to_date('2019-08-01', 'yyyy-mm-dd') and trunc(d.solddatecolumn) < to_date('2019-08-04', 'yyyy-mm-dd')

...

It's more efficient (sometimes much, much more efficient) to use raw columns in conditions, like this:

and d.solddatecolumn >= to_date('2019-08-01', 'yyyy-mm-dd')

and d.solddatecolumn < to_date('2019-08-04', 'yyyy-mm-dd')

The main reason is that it allows the optimizer to use an index on solddatecolumn.

It's also faster because it avoids calling TRUNC twice for every row in the table.

Thanks for the tip again!

I apologize, but I am not sure what you mean by sample data. ...

Here's an example of how to post sample data:

CREATE TABLE    sales
(   store       VARCHAR2 (10)
,   item        VARCHAR2 (10)
);

INSERT INTO sales (store, item) VALUES ('DOWNTOWN', 'PEN');
INSERT INTO sales (store, item) VALUES ('SUBURB',   'ERASER');

In this problem, the sample data will have more rows and more columns.

Is it from each table?

Yes.  If you need two or more tables to show what the problem is, then post sample data for all of them.

Edit: The ROLLUP gives me the sum, but I am actually looking for the average

ROLLUP works with SUM, AVG, MIN, LISTAGG and most (if not all) other aggregate functions.

If you can use it with SUM, but not AVG, then post both of your queries (the one using SUM that works, and the one using AVG that doesn't).

User_OMEF8

Frank Kulash wrote:

Hi,

976563 wrote:

Frank Kulash wrote:

Hi,

By the way:

976563 wrote:

...

and trunc(d.solddatecolumn) >= to_date('2019-08-01', 'yyyy-mm-dd') and trunc(d.solddatecolumn) < to_date('2019-08-04', 'yyyy-mm-dd')

...

It's more efficient (sometimes much, much more efficient) to use raw columns in conditions, like this:

and d.solddatecolumn >= to_date('2019-08-01', 'yyyy-mm-dd')

and d.solddatecolumn < to_date('2019-08-04', 'yyyy-mm-dd')

The main reason is that it allows the optimizer to use an index on solddatecolumn.

It's also faster because it avoids calling TRUNC twice for every row in the table.

Thanks for the tip again!

I apologize, but I am not sure what you mean by sample data. .

Here's an example of how to post sample data:

CREATE TABLE sales
( store VARCHAR2 (10)
, item VARCHAR2 (10)
);

INSERT INTO sales (store, item) VALUES ('DOWNTOWN', 'PEN');
INSERT INTO sales (store, item) VALUES ('SUBURB', 'ERASER');

In this problem, the sample data will have more rows and more columns.

Is it from each table?

Yes. If you need two or more tables to show what the problem is, then post sample data for all of them.

Edit: The ROLLUP gives me the sum, but I am actually looking for the average

ROLLUP works with SUM, AVG, MIN, LISTAGG and most (if not all) other aggregate functions.

If you can use it with SUM, but not AVG, then post both of your queries (the one using SUM that works, and the one using AVG that doesn't).

Thanks, Frank!

I couldn't figure out how to do the query using PIVOT, so I actually re-wrote my query to use a WITH.  The downside is that I still do not know how to get it to output the result across a date range, so I am having to do it on a per day basis instead.  As for the Average, I discovered a built-in function called AVG() and I am using that without having to do a ROLLUP call.

Frank Kulash

Hi,

976563 wrote:

Frank Kulash wrote:

Hi,

976563 wrote:

Frank Kulash wrote:

Hi,

By the way:

976563 wrote:

...

and trunc(d.solddatecolumn) >= to_date('2019-08-01', 'yyyy-mm-dd') and trunc(d.solddatecolumn) < to_date('2019-08-04', 'yyyy-mm-dd')

...

It's more efficient (sometimes much, much more efficient) to use raw columns in conditions, like this:

and d.solddatecolumn >= to_date('2019-08-01', 'yyyy-mm-dd')

and d.solddatecolumn < to_date('2019-08-04', 'yyyy-mm-dd')

The main reason is that it allows the optimizer to use an index on solddatecolumn.

It's also faster because it avoids calling TRUNC twice for every row in the table.

Thanks for the tip again!

I apologize, but I am not sure what you mean by sample data. .

Here's an example of how to post sample data:

CREATE TABLE sales
( store VARCHAR2 (10)
, item VARCHAR2 (10)
);

INSERT INTO sales (store, item) VALUES ('DOWNTOWN', 'PEN');
INSERT INTO sales (store, item) VALUES ('SUBURB', 'ERASER');

In this problem, the sample data will have more rows and more columns.

Is it from each table?

Yes. If you need two or more tables to show what the problem is, then post sample data for all of them.

Edit: The ROLLUP gives me the sum, but I am actually looking for the average

ROLLUP works with SUM, AVG, MIN, LISTAGG and most (if not all) other aggregate functions.

If you can use it with SUM, but not AVG, then post both of your queries (the one using SUM that works, and the one using AVG that doesn't).

Thanks, Frank!

I couldn't figure out how to do the query using PIVOT, so I actually re-wrote my query to use a WITH. The downside is that I still do not know how to get it to output the result across a date range, so I am having to do it on a per day basis instead. As for the Average, I discovered a built-in function called AVG() and I am using that without having to do a ROLLUP call.

If you'd like help, post CREATE TABLE and INSERT statements for a little sample data.

If the results you want from that data are not exactly what you posted in your original message (and clarified in reply #4), then post the desired results, too.

User_OMEF8

Here is the sample you asked for.  I will have to post the output tomorrow.  The results would follow the same concept as my original table above in the first post, but the numbers are going to be different.

insert into z_itemtable values ('1', 'PENCIL', '2019-08-26 05.00.00.000000000 PM', 'A');
insert into z_itemtable values ('2', 'PENCIL', '2019-08-27 05.00.00.000000000 PM', 'B');
insert into z_itemtable values ('3', 'PENCIL', '2019-08-28 05.00.00.000000000 PM', 'C');
insert into z_itemtable values ('4', 'PENCIL', '2019-08-29 05.00.00.000000000 PM', 'D');
insert into z_itemtable values ('5', 'PENCIL', '2019-08-30 05.00.00.000000000 PM', 'E');
insert into z_itemtable values ('6', 'PEN', '2019-08-28 05.00.00.000000000 PM', 'F');
insert into z_itemtable values ('7', 'PEN', '2019-08-29 05.00.00.000000000 PM', 'G');
insert into z_itemtable values ('8', 'PEN', '2019-08-30 05.00.00.000000000 PM', 'H');
insert into z_itemtable values ('9', 'ERASER', '2019-08-29 05.00.00.000000000 PM', 'I');
insert into z_itemtable values ('10', 'ERASER', '2019-08-30 05.00.00.000000000 PM', 'J');
insert into z_itemtable values ('11', 'PENCIL', '2019-08-27 05.00.00.000000000 PM', 'K');
insert into z_itemtable values ('12', 'ERASER', '2019-08-29 05.00.00.000000000 PM', 'L');

select * from z_itemtable;

insert into z_mastertable values ('1', 'DT');
insert into z_mastertable values ('2', 'DT');
insert into z_mastertable values ('3', 'SB');
insert into z_mastertable values ('4', 'SB');
insert into z_mastertable values ('5', 'SB');
insert into z_mastertable values ('6', 'EA');
insert into z_mastertable values ('7', 'EA');
insert into z_mastertable values ('8', 'EA');
insert into z_mastertable values ('9', 'WS');
insert into z_mastertable values ('10', 'WS');
insert into z_mastertable values ('11', 'DT');
insert into z_mastertable values ('12', 'DT');

select * from z_mastertable;

insert into z_storetable values ('DT', 'DOWNTOWN');
insert into z_storetable values ('SB', 'SUBURB');
insert into z_storetable values ('EA', 'EAST');
insert into z_storetable values ('WS', 'WEST');

select * from z_storetable;

insert into z_itemtabledetail values ('A', '2019-09-30 05.00.00.000000000 PM');
insert into z_itemtabledetail values ('B', '2019-09-15 05.00.00.000000000 PM');
insert into z_itemtabledetail values ('C', '2019-09-12 05.00.00.000000000 PM');
insert into z_itemtabledetail values ('D', '2019-09-26 05.00.00.000000000 PM');
insert into z_itemtabledetail values ('E', '2019-09-22 05.00.00.000000000 PM');
insert into z_itemtabledetail values ('F', '2019-09-03 05.00.00.000000000 PM');
insert into z_itemtabledetail values ('G', '2019-09-01 05.00.00.000000000 PM');
insert into z_itemtabledetail values ('H', '2019-09-11 05.00.00.000000000 PM');
insert into z_itemtabledetail values ('I', '2019-09-22 05.00.00.000000000 PM');
insert into z_itemtabledetail values ('J', '2019-09-14 05.00.00.000000000 PM');
insert into z_itemtabledetail values ('K', '2019-09-02 05.00.00.000000000 PM');
insert into z_itemtabledetail values ('L', '2019-09-07 05.00.00.000000000 PM');

select * from z_itemtabledetail;

User_OMEF8

CREATE TABLE "Z_ITEMTABLE"

   ( "MASTERID" VARCHAR2(20 BYTE),

"ITEMNAME" VARCHAR2(20 BYTE),

"ITEMARRIVALDATE" TIMESTAMP (6),

"DETAILID" VARCHAR2(20 BYTE)

   )

CREATE TABLE "Z_ITEMTABLEDETAIL"

   ( "ID" VARCHAR2(20 BYTE),

"DATESOLD" TIMESTAMP (6)

   )

CREATE TABLE "Z_MASTERTABLE"

   ( "ID" VARCHAR2(20 BYTE),

"STORECODE" VARCHAR2(20 BYTE)

   )

CREATE TABLE "Z_STORETABLE"

   ( "CODE" VARCHAR2(20 BYTE),

"STORENAME" VARCHAR2(20 BYTE)

   )

User_OMEF8

Partial, not-working query:

select * from (
    select st.storename, it.itemname, trunc(it.itemarrivaldate) as datesoldz
    from z_itemtable it
    join z_mastertable m on it.masterid = m.id
    join z_storetable st on m.storecode = st.code
    join z_itemtabledetail itd on it.detailid = itd.id
    where it.itemarrivaldate >= to_date('2019-08-26', 'yyyy-mm-dd') and it.itemarrivaldate < to_date('2019-08-31', 'yyyy-mm-dd')
    )
    pivot
    (
    count(*)
    for datesoldz in (DATE '2019-08-26',
                      DATE '2019-08-27',
                      DATE '2019-08-28',
                      DATE '2019-08-29',
                      DATE '2019-08-30')
    )
where storename in ('DOWNTOWN', 'SUBURB', 'EAST', 'WEST')
group by storename, itemname, 3
order by storename;

Frank Kulash

Hi,

976563 wrote:

Here is the sample you asked for. I will have to post the output tomorrow. The results would follow the same concept as my original table above in the first post, but the numbers are going to be different.

insert into z_itemtable values ('1', 'PENCIL', '2019-08-26 05.00.00.000000000 PM', 'A');

insert into z_itemtable values ('2', 'PENCIL', '2019-08-27 05.00.00.000000000 PM', 'B');

...

Many of those INSERT statements fail on my system.

Don't try to insert strings, such as '2019-08-26 05.00.00.000000000 PM', into TIMESTAMP columns, such as z_itemtable.itemarrivaldate.

User_OMEF8

Ahh, the INSERTs worked when I ran them.  I am on Oracle Database 11g Release 11.2.0.4.0 - 64bit.

The desired output would be like...

| STORE | ITEMS | 2019-08-26 Sum | 2019-08-26 Avg | 2019-08-27 Sum | 2019-08-27 Avg | 2019-08-28 Sum | 2019-08-28 Avg | 2019-08-29 Sum | 2019-08-29 Avg | 2019-08-30 Sum | 2019-08-30 Avg |
| DOWNTOWN | ERASER | 0 | 0 | 0 | 0 | 0 | 0 | 9 | 9 | 0 | 0 |
| DOWNTOWN | PENCIL | 35 | 35 | 25 | 12.5 | 0 | 0 | 0 | 0 | 0 | 0 |
| EAST | PEN | 0 | 0 | 0 | 0 | 6 | 6 | 3 | 3 | 12 | 12 |
| SUBURB | PENCIL | 0 | 0 | 0 | 0 | 15 | 15 | 28 | 28 | 23 | 23 |

Row 1 ==> Only 1 found, the value is 9 because the (datesold minus itemarrivaldate [2019-09-07 minus 2019-08-29 ==> 9]).  This becomes the SUM and AVG.

Row 2 ==> 2019-08-26 only 1 account is found. The value is 35 because (datesold minus itemarrivaldate [2019-09-30 minus 2019-08-26 ==> 35]).  This becomes the SUM and AVG.

Row 2 ==> 2019-08-27 there are 2 accounts.  1st account (2019-09-15 minus 2019-08-27 ==> 19).  2nd account (2019-09-02 minus 2019-08-27 ==> 6).  19 + 6 => 25 SUM, 12.5 AVG

Row 3 and 4 follow the same logic as Row 1 because there was only 1 row found.

Frank Kulash
Answer

Hi,

976563 wrote:

Ahh, the INSERTs worked when I ran them. I am on Oracle Database 11g Release 11.2.0.4.0 - 64bit.

The desired output would be like...

| STORE | ITEMS | 2019-08-26 Sum | 2019-08-26 Avg | 2019-08-27 Sum | 2019-08-27 Avg | 2019-08-28 Sum | 2019-08-28 Avg | 2019-08-29 Sum | 2019-08-29 Avg | 2019-08-30 Sum | 2019-08-30 Avg |
| DOWNTOWN | ERASER | 0 | 0 | 0 | 0 | 0 | 0 | 9 | 9 | 0 | 0 |
| DOWNTOWN | PENCIL | 35 | 35 | 25 | 12.5 | 0 | 0 | 0 | 0 | 0 | 0 |
| EAST | PEN | 0 | 0 | 0 | 0 | 6 | 6 | 3 | 3 | 12 | 12 |
| SUBURB | PENCIL | 0 | 0 | 0 | 0 | 15 | 15 | 28 | 28 | 23 | 23 |

Row 1 ==> Only 1 found, the value is 9 because the (datesold minus itemarrivaldate [2019-09-07 minus 2019-08-29 ==> 9]). This becomes the SUM and AVG.

Row 2 ==> 2019-08-26 only 1 account is found. The value is 35 because (datesold minus itemarrivaldate [2019-09-30 minus 2019-08-26 ==> 35]). This becomes the SUM and AVG.

Row 2 ==> 2019-08-27 there are 2 accounts. 1st account (2019-09-15 minus 2019-08-27 ==> 19). 2nd account (2019-09-02 minus 2019-08-27 ==> 6). 19 + 6 => 25 SUM, 12.5 AVG

Row 3 and 4 follow the same logic as Row 1 because there was only 1 row found.

Earlier, you were interested in a subtotal showing all items from each store, as well as the individual store/item combinations.  Now you've changed the requirements to show only the individual items; is that right?

Here's one way to do that:

WITH    data_to_pivot    AS
(
    SELECT    st.storename
    ,         it.itemname
    ,         TRUNC (it.itemarrivaldate)  AS datearrived
    ,         TRUNC (itd.datesold)
            - TRUNC (it.itemarrivaldate)  AS ndays
    FROM      z_itemtable         it
    JOIN      z_mastertable        m  ON  it.masterid  = m.id
    JOIN      z_storetable        st  ON  m.storecode  = st.code
    JOIN      z_itemtabledetail  itd  ON  it.detailid  = itd.id
    WHERE     it.itemarrivaldate  >= TO_DATE ('2019-08-26', 'yyyy-mm-dd')
    AND       it.itemarrivaldate  <  TO_DATE ('2019-08-31', 'yyyy-mm-dd')
    AND       st.storename        IN ('DOWNTOWN', 'SUBURB', 'EAST', 'WEST')
)
SELECT    *
FROM      data_to_pivot
PIVOT     (    SUM (ndays)   AS sum
          ,    AVG (ndays)   AS avg
          FOR  datearrived IN ( DATE '2019-08-26'  AS d_2019_08_26
                              , DATE '2019-08-27'  AS d_2019_08_27
                              , DATE '2019-08-28'  AS d_2019_08_28
                              , DATE '2019-08-29'  AS d_2019_08_29
                              , DATE '2019-08-30'  AS d_2019_08_30
                              )
          )
ORDER BY  storename, itemname
;

The results I get are a little different from what you posted:

                               D_2019 D_2019 D_2019 D_2019 D_2019 D_2019 D_2019 D_2019 D_2019 D_2019
                               _08_26 _08_26 _08_27 _08_27 _08_28 _08_28 _08_29 _08_29 _08_30 _08_30
STORENAME ITEMNAME               _SUM   _AVG   _SUM   _AVG   _SUM   _AVG   _SUM   _AVG   _SUM   _AVG
--------- -------------------- ------ ------ ------ ------ ------ ------ ------ ------ ------ ------
DOWNTOWN  ERASER                                                              9      9
DOWNTOWN  PENCIL                   35     35     25   12.5
EAST      PEN                                                   6      6      3      3     12     12
SUBURB    PENCIL                                               15     15     28     28     23     23
WEST      ERASER                                                             24     24     15     15

I get a row for storename='WEST'.  Did you mean to include that row in your desired results?

As posted, the solution shows NULLs for missing data.  You can use NVL to get 0's instead, if you want them.
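For illustration, here is the same query trimmed to a single pivoted date, with NVL wrapped around each output column; the remaining date columns would follow the same pattern. This is only a sketch of where the NVL goes, not a change to the logic.

WITH    data_to_pivot    AS
(
    SELECT    st.storename
    ,         it.itemname
    ,         TRUNC (it.itemarrivaldate)                         AS datearrived
    ,         TRUNC (itd.datesold) - TRUNC (it.itemarrivaldate)  AS ndays
    FROM      z_itemtable         it
    JOIN      z_mastertable        m  ON  it.masterid  = m.id
    JOIN      z_storetable        st  ON  m.storecode  = st.code
    JOIN      z_itemtabledetail  itd  ON  it.detailid  = itd.id
    WHERE     it.itemarrivaldate  >= TO_DATE ('2019-08-26', 'yyyy-mm-dd')
    AND       it.itemarrivaldate  <  TO_DATE ('2019-08-31', 'yyyy-mm-dd')
    AND       st.storename        IN ('DOWNTOWN', 'SUBURB', 'EAST', 'WEST')
)
SELECT    storename
,         itemname
,         NVL (d_2019_08_26_sum, 0)  AS d_2019_08_26_sum   -- 0 instead of NULL
,         NVL (d_2019_08_26_avg, 0)  AS d_2019_08_26_avg
          -- one NVL pair per pivoted date column
FROM      data_to_pivot
PIVOT     (    SUM (ndays)   AS sum
          ,    AVG (ndays)   AS avg
          FOR  datearrived IN ( DATE '2019-08-26'  AS d_2019_08_26 )
          )
ORDER BY  storename, itemname;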

Marked as Answer by User_OMEF8 · Sep 27 2020
User_OMEF8

Frank Kulash wrote:

Earlier, you were interested in a subtotal showing all items from each store. Now you've changed the requirements to show only the individual items; is that right?

Thank you for your help.  Yes, the requirements changed this morning after I spoke with the user (that seems to be very common, in my experience, unfortunately).  I greatly appreciate your help.  When I re-wrote the query, I used a WITH, but then I had all of my dates as a single column that went down by each row, not across as column headers.  The PIVOT you used differs quite a bit from mine, but I will work through your code to understand it better.  Thanks again!

User_OMEF8

Frank Kulash wrote:

I get a row for storename='WEST'. Did you mean to include that row in your desired results?

As posted, the solution shows NULLs for missing data. You can use NVL to get 0's instead, if you want them.

Sorry, I realized I did not answer your other question.  Yes, I had that in the result, but it looks like I did not include it in my desired result set.

Frank Kulash

Hi,

976563 wrote:

...

When I re-wrote the query, I used a WITH, but then I had all of my dates as a single column that went down by each row and not across as column headers. The PIVOT you used differed much from mine, but I will decode your code to understand it better. Thanks again!

To understand pivots, it often helps to run the exact same query without the pivot, like this:

WITH    data_to_pivot    AS
(
    SELECT    st.storename
    ,         it.itemname
    ,         TRUNC (it.itemarrivaldate)  AS datearrived
    ,         TRUNC (itd.datesold)
            - TRUNC (it.itemarrivaldate)  AS ndays
    FROM      z_itemtable         it
    JOIN      z_mastertable        m  ON  it.masterid  = m.id
    JOIN      z_storetable        st  ON  m.storecode  = st.code
    JOIN      z_itemtabledetail  itd  ON  it.detailid  = itd.id
    WHERE     it.itemarrivaldate  >= TO_DATE ('2019-08-26', 'yyyy-mm-dd')
    AND       it.itemarrivaldate  <  TO_DATE ('2019-08-31', 'yyyy-mm-dd')
    AND       st.storename        IN ('DOWNTOWN', 'SUBURB', 'EAST', 'WEST')
)
SELECT    *
FROM      data_to_pivot
/* PIVOT     (    SUM (ndays)   AS sum
          ,    AVG (ndays)   AS avg
          FOR  datearrived IN ( DATE '2019-08-26'  AS d_2019_08_26
                              , DATE '2019-08-27'  AS d_2019_08_27
                              , DATE '2019-08-28'  AS d_2019_08_28
                              , DATE '2019-08-29'  AS d_2019_08_29
                              , DATE '2019-08-30'  AS d_2019_08_30
                              )
   ) */
ORDER BY  storename, itemname
;

Make sure you understand why this gets the results it does before trying to understand the pivot.

