
Error on line 3 at column 231: xmlParseCharRef: invalid xmlChar value 31 using SOA REST Adapter

Pragadeesh PV, Dec 30 2021

Dear All,
I am getting the error below while invoking the REST Adapter. Please share if you have faced a similar issue and know the solution.
<messages>
-<parsererror style="display: block; white-space: pre; border: 2px solid #c77; padding: 0 1em 0 1em; margin: 1em; background-color: #fdd; color: black">
<h3>This page contains the following errors:</h3>
<div style="font-family:monospace;font-size:12px">
error on line 3 at column 231: xmlParseCharRef: invalid xmlChar value 31
</div>
<h3>Below is a rendering of the page up to the first error.</h3>
</parsererror>
Thanks & Regards,
Hari
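Character 31 (0x1F, the ASCII unit separator) is outside the legal XML 1.0 character set, so any conforming parser must reject the reference `&#31;` wherever it appears in the payload; the fix is to clean the source data, or to strip the illegal references before parsing. A minimal Python sketch reproducing the error, assuming the raw payload can be pre-processed before parsing; the `sanitize` helper and its regex are illustrative, not part of the SOA REST Adapter:

```python
import re
import xml.etree.ElementTree as ET

# XML 1.0 forbids most control characters below U+0020 -- only tab
# (0x09), newline (0x0A) and carriage return (0x0D) are legal -- and
# the ban applies even to numeric character references such as &#31;.
bad = "<payload>value&#31;here</payload>"

try:
    ET.fromstring(bad)
except ET.ParseError as exc:
    print("parse failed:", exc)  # same class of error the adapter reports

def _drop_illegal(match):
    """Drop numeric character references that XML 1.0 does not allow."""
    ref = match.group(1)
    n = int(ref[1:], 16) if ref.startswith(("x", "X")) else int(ref)
    legal = (n in (0x09, 0x0A, 0x0D) or 0x20 <= n <= 0xD7FF
             or 0xE000 <= n <= 0xFFFD or 0x10000 <= n <= 0x10FFFF)
    return match.group(0) if legal else ""

def sanitize(xml_text):
    """Strip illegal numeric character references before parsing."""
    return re.sub(r"&#(\d+|[xX][0-9a-fA-F]+);", _drop_illegal, xml_text)

print(ET.fromstring(sanitize(bad)).text)  # "valuehere"
```

Cleaning the producer of the payload is the better long-term fix; stripping references only hides the fact that a control character entered the data.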

Comments

477011
On a side note, I discovered that a composite unique constraint does prevent data that violates this rule from being added; however, that does not solve the problem of finding the duplicates that already exist.
21205

This is one way of finding duplicate rows (arbitrarily treating all but one row per group as the duplicates):

select *
  from (select key
             , a
             , b
             , row_number() over (partition by a, b 
                                      order by null
                             ) rn
          from t
       )
 where rn > 1
/

It assigns a ranking based on groups of "a, b" combinations and shows you the ones where it ranks higher than one.
If you want to see all the duplicates, you could use this:

select key
     , a
     , b
  from t
 where (a, b) in (select a, b
                    from t
                   group by a, b
                  having Count(*) > 1
                 )

There are probably lots of other (and better) ways to do this.
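Both queries above can be exercised against any engine with window functions. A small sqlite3 sketch, assuming a toy table; the table `t` and columns `key`, `a`, `b` follow the post, while `ORDER BY "key"` replaces the post's `ORDER BY NULL` so the surviving row is deterministic here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE t ("key" INTEGER PRIMARY KEY, a TEXT, b TEXT)')
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, "x", "1"), (2, "x", "1"), (3, "y", "2"), (4, "x", "1")])

# First query from the post: rank rows within each (a, b) group and
# keep those ranked after the first.
dups = conn.execute('''
    SELECT "key", a, b
      FROM (SELECT "key", a, b,
                   ROW_NUMBER() OVER (PARTITION BY a, b ORDER BY "key") AS rn
              FROM t)
     WHERE rn > 1
''').fetchall()
print(dups)  # keys 2 and 4 rank behind key 1 in group ('x', '1')

# Second query from the post: every member of each duplicated group,
# including the row that would survive a de-duplication.
all_dups = conn.execute('''
    SELECT "key", a, b
      FROM t
     WHERE (a, b) IN (SELECT a, b FROM t GROUP BY a, b HAVING COUNT(*) > 1)
''').fetchall()
print(all_dups)  # keys 1, 2 and 4
```

The first form answers "which rows could I delete?", the second answers "which rows are involved in a duplication at all?" — they intentionally return different sets.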

Laurent Schneider
select * from (select t.*, count(*) over (partition by a,b) c from t) where c>1;
William Robertson
The DELETE command might go something like
DELETE yourtable
WHERE  ROWID IN
       ( SELECT LEAD(ROWID) OVER (PARTITION BY key1, key2 ORDER BY NULL)
         FROM   yourtable );
This would delete any row after the first in each set of duplicates. You could control the ordering by adjusting the ORDER BY to include other columns.
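The LEAD trick works because each row's `LEAD(ROWID)` is the rowid of the *next* row in its duplicate group, so the subquery lists every rowid except the first per group. A runnable sqlite3 sketch of the same statement, assuming hypothetical key columns `key1`, `key2` as in the post and substituting `ORDER BY rowid` for `ORDER BY NULL`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (key1 TEXT, key2 TEXT, val TEXT)")
conn.executemany("INSERT INTO yourtable VALUES (?, ?, ?)",
                 [("a", "1", "keep"), ("a", "1", "dup"),
                  ("b", "2", "keep"), ("a", "1", "dup2")])

# LEAD(rowid) emits, for each row, the rowid of the next row in its
# (key1, key2) group -- i.e. every rowid except the first per group.
# The NULL emitted by each group's last row is harmless inside IN.
conn.execute("""
    DELETE FROM yourtable
     WHERE rowid IN (SELECT LEAD(rowid) OVER
                            (PARTITION BY key1, key2 ORDER BY rowid)
                       FROM yourtable)
""")
rows = conn.execute(
    "SELECT key1, key2, val FROM yourtable ORDER BY key1").fetchall()
print(rows)  # one row left per (key1, key2) pair
```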
477011
Thanks very much to everyone. Indeed, this works great. Unfortunately, a good chunk of the code here still looks like a foreign language to me, but these posts did help me see that the solution was not one I had missed somewhere in my learning.

I'll be sure to keep a sharp eye out for the LEAD, OVER, and PARTITION keywords in my further studies. Thanks again!
Michel SALAIS

Try this one:

SELECT Max(Key)
FROM t
GROUP BY A, B
HAVING COUNT(*) > 1

Now, if you want to delete the duplicates, you can do the following:

DELETE t
WHERE ROWID NOT IN
    (SELECT Max(ROWID)
     FROM t
     GROUP BY A, B)
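This variant keeps the row with the highest ROWID in each (A, B) group and deletes everything else; rows with no duplicates survive automatically, because their single ROWID is the group maximum. A sqlite3 sketch of the same idea, with an illustrative `note` column to show which row survives:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a TEXT, b TEXT, note TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [("x", "1", "old"), ("x", "1", "newest"),
                  ("y", "2", "only")])

# Keep the max-rowid row of each (a, b) group; delete the rest.
conn.execute("""
    DELETE FROM t
     WHERE rowid NOT IN (SELECT MAX(rowid) FROM t GROUP BY a, b)
""")
rows = conn.execute("SELECT a, b, note FROM t ORDER BY a").fetchall()
print(rows)  # the later ('x', '1') row and the unique ('y', '2') row
```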
William Robertson
I was assuming that the number of rows to be deleted was significantly smaller than the number of rows to retain, in which case I would expect

WHERE ROWID IN (subquery returning say 5% of rowids)

to perform better than

WHERE ROWID NOT IN (subquery returning say 95% of rowids)
Michel SALAIS
> I was assuming that the number of rows to be deleted was significantly smaller than the number of rows to retain, in which case I would expect
>
> WHERE ROWID IN (subquery returning say 5% of rowids)
>
> to perform better than
>
> WHERE ROWID NOT IN (subquery returning say 95% of rowids)

Agreed. But we don't know what the situation is here ...
