Because of how much memory is required, my preference during development is to run only oim_server1 and soa_server2. However, whenever oim_server1 tries to reach the SOA instance, it always fails with:
<Nov 16, 2011 4:35:39 PM EST> <Error> <oracle.iam.tasklist.agentry.task> <BEA-000000> < javax.naming.CommunicationException [Root exception is java.net.ConnectException: t3://SERVER1.mydomain.com,SERVER1.mydomain.com:8001/soa-infra: Destination unreachable; nested exception is:
java.net.ConnectException: Connection refused: connect; No available router to destination]>
How do I configure the cluster to know that it should be looking for the address t3://SERVER1.mydomain.com:8001/soa-infra,SERVER2.mydomain.com:8001/soa-infra ?
I tried configuring the cluster with a comma-separated address, but had no luck; it still always looks for the local address. Is this configurable in WebLogic, or do you think it's hard-coded into the OIM application to always look locally?
For the cluster address, I filled in "10.241.110.105,10.241.110.106" and also updated the listen address on each of the soa_servers to its IP address. It still shows the same error when the SOA server is not started on the same box, and it still uses the hostname rather than an IP address.
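One thing worth noting: WebLogic expects a cluster address as a comma-separated list of host:port pairs, with a port on each entry (which may be why the error above shows a malformed "SERVER1.mydomain.com,SERVER1.mydomain.com:8001" URL). A sketch of what that looks like in config.xml, with placeholder hostnames:

```xml
<!-- Sketch only: cluster name and hostnames are placeholders for your environment -->
<cluster>
  <name>soa_cluster</name>
  <cluster-address>SERVER1.mydomain.com:8001,SERVER2.mydomain.com:8001</cluster-address>
</cluster>
```

The same value can be entered in the admin console under the cluster's General settings as the Cluster Address.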
A colleague of mine was able to provide me with the answer. In the EM console, I updated the MBean containing the SOA address, replacing it with the cluster address. I also updated the OHS configuration on the HTTP server to be aware of both instances.
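For anyone who prefers scripting this instead of clicking through EM, the same change can be made with WLST. This is only a sketch: the MBean path and attribute names below are from an OIM 11g install and may differ in your version, and the host names, credentials, and URLs are placeholders.

```python
# WLST sketch -- run with wlst.sh, not standalone Python.
# MBean path and attribute names (Rmiurl/Soapurl) are assumptions
# based on OIM 11g; verify them in the EM System MBean Browser first.
connect('weblogic', 'password', 't3://adminhost:7001')
custom()  # switch to the custom MBean tree where the oracle.iam MBeans live
cd('oracle.iam/oracle.iam:name=SOAConfig,type=XMLConfig.SOAConfig,'
   'XMLConfig=Config,Application=oim')
# Point the RMI URL at the whole cluster instead of the local host only
set('Rmiurl', 't3://SERVER1.mydomain.com:8001,SERVER2.mydomain.com:8001')
# The SOAP URL should go through the front-end OHS that balances both instances
set('Soapurl', 'http://ohs-host:7777')
exit()
```

On the OHS side, the relevant piece is the mod_wl_ohs `WebLogicCluster` directive, which takes the same comma-separated host:port list so HTTP traffic is also spread across both SOA instances.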
Are you accessing the server from Windows or Linux?
- On Linux try editing the /etc/hosts file and add something like: 10.241.110.105 server1.etcetera
- On Windows try editing the C:\WINDOWS\system32\drivers\etc\hosts file and add something like: 10.241.110.105 server1.etcetera
When this works, contact your system administrator and ask them to map the IP to your hostname in DNS and DHCP, so that the server name automatically resolves to the IP address.
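To confirm an entry in the hosts file (or DNS) actually took effect, a quick resolution check helps. A minimal Python sketch, demonstrated with localhost since the server names above are placeholders; substitute your real hostname:

```python
import socket

def resolves(hostname):
    """Return the IPv4 address the OS resolves for hostname, or None on failure."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

# Replace "localhost" with e.g. "SERVER1.mydomain.com" in your environment.
print(resolves("localhost"))  # e.g. 127.0.0.1
```

This uses the same resolver order (hosts file, then DNS) that the JVM's default name lookup goes through, so it is a reasonable first check before digging into WebLogic configuration.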
It was never an issue of name resolution. The issue was that the SOA server was not running on the local machine but on a different machine that was still in the cluster, and OIM was always trying to use the local hostname instead of the cluster address. Now that it's using a clustered address, it can find the other machine's instance when the local one is not running.