871270 wrote: Exactly, because this approach needs to scan the entire database. And because this approach is not how one uses databases!
But it is taking a very long time. I need to search my entire database.
So why exactly are you using this approach? How on earth is this a sensible and logical approach to relational design and data modelling?
I have tried it like this as well, but the result is the same:
DECLARE
  l_value VARCHAR2(100) := 'VALUE';  -- the value being searched for
  l_cnt   PLS_INTEGER;
  l_sql   VARCHAR2(1000);
BEGIN
  -- USER_TAB_COLS already lists only the current schema's columns,
  -- so no owner/schema filter is needed (the original predicate
  -- "WHERE user = 'MY SCHEMA NAME'" compared the USER function to a
  -- literal and would never match)
  FOR x IN ( SELECT table_name, column_name
             FROM   user_tab_cols
             WHERE  data_type IN ('CHAR', 'VARCHAR2') )
  LOOP
    l_sql := 'SELECT COUNT(*) FROM ' || user || '.' || x.table_name ||
             ' WHERE ' || x.column_name || ' = :1';
    EXECUTE IMMEDIATE l_sql INTO l_cnt USING l_value;
    IF l_cnt > 0 THEN
      dbms_output.put_line( 'Found the value ' || l_value ||
                            ' in owner = ' || USER ||
                            ' table name = ' || x.table_name ||
                            ' column name = ' || x.column_name );
    END IF;
  END LOOP;
END;
You have not answered the question as to WHY this needs to be done. What are the business requirements behind this (seemingly insane) approach?
As for the approach: it WILL be slow. ALWAYS.
It hits an entire table looking for relevant rows. So if there are 100 tables, that means, at minimum, 100 full-table (or full-index) scans.
A table typically has more than one VARCHAR2 or CHAR column, and the entire table is scanned once per such column.
Let's say that, on average, these 100 tables each have 10 such columns. This means each table (the SAME data) is read again, and again, and again: 10x.
Let's say these 100 tables total 10 GB. That same 10 GB is then read 10x, so 100 GB worth of I/O is done on 10 GB of data.
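The 10x re-read can at least be reduced to a single scan per table: instead of one query per column, build one query per table that checks all of its CHAR/VARCHAR2 columns at once, using the single-bind form "WHERE :val IN (col1, col2, ...)". The following is only a rough sketch of that idea, not a recommendation of the overall approach; it assumes the concatenated column list fits within LISTAGG's VARCHAR2 limit and ignores CHAR blank-padding semantics.

```sql
DECLARE
  l_value VARCHAR2(100) := 'VALUE';  -- placeholder search value
  l_cnt   PLS_INTEGER;
  l_sql   VARCHAR2(4000);
BEGIN
  -- one row (and therefore one dynamic query) per TABLE, not per column
  FOR t IN ( SELECT table_name,
                    LISTAGG(column_name, ', ')
                      WITHIN GROUP (ORDER BY column_id) AS col_list
             FROM   user_tab_cols
             WHERE  data_type IN ('CHAR', 'VARCHAR2')
             GROUP  BY table_name )
  LOOP
    -- ":val IN (c1, c2, ...)" is equivalent to "c1 = :val OR c2 = :val OR ..."
    -- but needs only a single bind placeholder, and the table is scanned once
    l_sql := 'SELECT COUNT(*) FROM ' || t.table_name ||
             ' WHERE :val IN (' || t.col_list || ')';
    EXECUTE IMMEDIATE l_sql INTO l_cnt USING l_value;
    IF l_cnt > 0 THEN
      dbms_output.put_line( 'Found the value ' || l_value ||
                            ' in table ' || t.table_name );
    END IF;
  END LOOP;
END;
```

Note that this version only reports the table, not which column matched; recovering the column would need a second, per-column pass over the (few) tables that report a hit. Even with one scan per table, this remains a full scan of the entire schema.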
So WHY do you need to do this???