Thanks Srini for the update.
I just wanted to make sure it's also applicable to a DMZ setup, without any other changes to make.
I got the message below for step 4 (Disable the old Appl_Tops): can we ignore it?
SQL> @adphatin.sql manager node2
Connecting to SYSTEM schema...
Inactivating APPL_TOP node2 ...
error while deactivating appl_top node2
appltop not found
Disconnected from Oracle Database 11g Enterprise Edition Release 220.127.116.11.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
I think you can, but it's best to open an SR with Support to confirm.
Srini Chavali-Oracle wrote:
See MOS Doc 1250254.1
If you follow the note, executing steps 1-3, fnd_conc_clone.setup_clean, and autoconfig on the database and the nodes you want to keep (DB1, App1, N1), you are basically evicting N2. Please check ad_appl_tops to see what the name should be.
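To check what is registered, a minimal sketch (assuming access to the APPS schema; of the columns in AD_APPL_TOPS, only NAME is relied on here):

```sql
-- List the APPL_TOP names currently registered in AD_APPL_TOPS
SELECT name
FROM   ad_appl_tops
ORDER  BY name;
```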
The entry for Node2 is missing from the ad_appl_tops.
Then just execute the steps described above and it will not appear in FND_NODES anymore. Was Node2 a shared appl_top in the DMZ?
Yes, Node2 has a shared APPL_TOP in the DMZ.
If Node2 only appears in the FND_NODES table, then just execute steps 1-3, fnd_conc_clone.setup_clean, and autoconfig on the database and the nodes you want to keep (DB1, App1, N1). There is no requirement to merge appl_tops, since you had a shared APPL_TOP and this node does not exist in ad_appl_tops anyway.
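The cleanup step above can be sketched as follows (a sketch, not verbatim from the note; run as the APPS user, and use your release's autoconfig invocation):

```sql
-- Purge topology data (FND_NODES and related tables); autoconfig on the
-- surviving nodes (DB1, App1, N1) then re-registers only those nodes.
SQL> EXEC FND_CONC_CLONE.SETUP_CLEAN;
SQL> COMMIT;

-- After autoconfig has run on all remaining tiers, Node2 should be gone:
SQL> SELECT node_name, status FROM fnd_nodes ORDER BY node_name;
```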
Thanks for the update Micheal.
I have some other questions about the same note ID (Step 3).
"The context files on the remaining nodes need to be updated removing all references to the removed node(s)."
What exactly does this refer to?
In this case, the CONTEXT_FILE of Node1 has "APPL_TOP_NAME" set to Node2.
After evicting Node2, should the "APPL_TOP_NAME" entry in Node1's CONTEXT_FILE point to Node1?
Node1's CONTEXT_FILE, after evicting Node2:
Yes. Before you run autoconfig on Node1, ensure $CONTEXT_FILE has no references to Node2. Run autoconfig on the DMZ Node1 before you run it on App1.
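That pre-check can be scripted. A minimal sketch, using a hypothetical copy of the context file (the real file lives at $CONTEXT_FILE on Node1):

```shell
#!/bin/sh
# Sketch: fail if the context file still references the evicted node.
# /tmp/node1_context.xml is a stand-in for Node1's real $CONTEXT_FILE.
CONTEXT_COPY=/tmp/node1_context.xml
printf '<APPL_TOP_NAME oa_var="s_atName">node1</APPL_TOP_NAME>\n' > "$CONTEXT_COPY"

if grep -qi 'node2' "$CONTEXT_COPY"; then
  echo "references remain: fix before running autoconfig"
else
  echo "clean"
fi
```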
Are there any checks we should make after completing this task (node eviction)?
The query below gives me the list of profile names whose value contains the string 'isupport'; most of them are just URLs.
SELECT po.profile_option_name as name
, decode(to_char(pov.level_id),'10001','SITE','10002','APP','10003','RESP','10005','SERVER','10006','ORG','10004','USER','???') as "LEVEL"
, decode(to_char(pov.level_id),'10001','','10002',app.application_short_name,'10003',rsp.responsibility_key,'10005',svr.node_name,'10006',org.name,'10004',usr.user_name,'???') as context
, pov.profile_option_value as value
FROM fnd_profile_options po
, fnd_profile_option_values pov
, fnd_user usr
, fnd_application app
, fnd_responsibility rsp
, fnd_nodes svr
, hr_operating_units org
WHERE pov.application_id = po.application_id
AND pov.profile_option_id = po.profile_option_id
AND usr.user_id (+) = pov.level_value
AND rsp.application_id (+) = pov.level_value_application_id
AND rsp.responsibility_id (+) = pov.level_value
AND app.application_id (+) = pov.level_value
AND svr.node_id (+) = pov.level_value
AND org.organization_id (+) = pov.level_value
AND pov.profile_option_value like '%isupport%';
Of course I can't query the same profile values from the front end at server level to confirm, as we no longer have the server (Node2) entry after the node eviction.
Dangling profile options are not an issue. The only time you would have to change your URL-based profile options is if you changed your reverse-proxy entry point to something other than isupport. How did you direct traffic between Node1 and Node2 in the DMZ?
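To see which dangling entries remain, a hedged sketch using the same tables and level IDs as the query above (10005 = SERVER level):

```sql
-- Server-level profile values whose node no longer exists in FND_NODES
SELECT po.profile_option_name, pov.level_value, pov.profile_option_value
FROM   fnd_profile_options po
,      fnd_profile_option_values pov
WHERE  pov.profile_option_id = po.profile_option_id
AND    pov.application_id    = po.application_id
AND    pov.level_id          = 10005
AND    NOT EXISTS (SELECT 1 FROM fnd_nodes svr
                   WHERE  svr.node_id = pov.level_value);
```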
Thanks again Micheal, we have a reverse proxy in place to deal with the traffic...