A question: what are you going to do if an unknown message comes in? And if there are no unknown messages, why wouldn't you just build a WSDL with all the known messages and the Auth policy?
But suppose you do not want to do that for whatever reason.
What I would try to do:
1. Create a custom WSDL with a single operation A and the Auth policy attached to it
2. Assign this WSDL to a separate proxy
3. Forward the incoming request to that proxy
4. The proxy would apply the Auth policy, throwing a fault if the user is not valid
5. If the user is valid, the proxy would simply return control, leaving the body unchanged
Repeat per operation.
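To make step 1 concrete, a single-operation WSDL for such an auth-only proxy might look roughly like the sketch below. All names here (the `AuthService` service, `OperationA`, the `http://example.com/auth` namespace) are illustrative placeholders, not taken from your actual services:

```xml
<!-- Hypothetical single-operation WSDL; every name and namespace is a placeholder. -->
<definitions name="AuthService"
             targetNamespace="http://example.com/auth"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             xmlns:tns="http://example.com/auth"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <types>
    <xsd:schema targetNamespace="http://example.com/auth">
      <!-- anyType keeps the proxy payload-agnostic; only auth matters here -->
      <xsd:element name="OperationARequest" type="xsd:anyType"/>
      <xsd:element name="OperationAResponse" type="xsd:anyType"/>
    </xsd:schema>
  </types>
  <message name="OperationARequestMsg">
    <part name="body" element="tns:OperationARequest"/>
  </message>
  <message name="OperationAResponseMsg">
    <part name="body" element="tns:OperationAResponse"/>
  </message>
  <portType name="AuthPortType">
    <operation name="OperationA">
      <input message="tns:OperationARequestMsg"/>
      <output message="tns:OperationAResponseMsg"/>
    </operation>
  </portType>
  <binding name="AuthBinding" type="tns:AuthPortType">
    <soap:binding style="document"
                  transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="OperationA">
      <soap:operation soapAction="OperationA"/>
      <input><soap:body use="literal"/></input>
      <output><soap:body use="literal"/></output>
    </operation>
  </binding>
  <service name="AuthService">
    <port name="AuthPort" binding="tns:AuthBinding">
      <soap:address location="http://localhost/auth"/>
    </port>
  </service>
</definitions>
```

The Auth policy would then be attached to this WSDL (or directly to the proxy based on it), so each auth-proxy validates exactly one operation.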
The existing configuration is as follows:
External Source System 1 -> OSB Proxy Service 1
External Source System 2 -> OSB Proxy Service 2
External Source System 3 -> OSB Proxy Service 3
Each proxy service above has its own WSDL and message access control.
The proposed design is something like below:
External Source System 1 -> OSB Gateway proxy service -> OSB Proxy Service 1
External Source System 2 -> OSB Gateway proxy service -> OSB Proxy Service 2
External Source System 3 -> OSB Gateway proxy service -> OSB Proxy Service 3
The purpose of the OSB Gateway proxy is to perform all common tasks and thereby remove the need to perform them in the individual proxy services. For example, it does logging, collects data for performance monitoring, enforces security, etc. After performing the common tasks, the gateway proxy service forwards the message to the individual proxy service (using dynamic routing based on the SOAPAction transport header).
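If it helps, the gateway's Dynamic Routing action could be driven by an XQuery expression along these lines. This is a sketch under assumptions: I assume `$soapAction` has been assigned earlier from the inbound transport headers, the `ctx:route`/`ctx:service` shape matches what the Dynamic Routing action expects, and the project/folder paths are made up:

```xquery
(: Hypothetical routing expression; $soapAction, the action values,
   and the proxy paths are all illustrative placeholders. :)
<ctx:route xmlns:ctx="http://www.bea.com/wli/sb/context">
  <ctx:service isProxy="true">
  {
    if ($soapAction = '"ProcessOrder"') then 'Gateway/ProxyServices/ProxyService1'
    else if ($soapAction = '"ProcessPayment"') then 'Gateway/ProxyServices/ProxyService2'
    else 'Gateway/ProxyServices/ProxyService3'
  }
  </ctx:service>
</ctx:route>
```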
The proposed configuration works fine (as long as the message access control remains in the individual proxy services). The problem we are facing is how to move the authorization to the Gateway proxy service.
Appreciate your effort in trying to answer this question.
At my current client, we faced the same problem and implemented a similar design which, although a bit more involved, has important advantages.
The issue with an Any SOAP entry proxy is not only the complexity of authentication. More importantly, fine-grained resource (thread) management becomes impossible: the entry proxy has one work manager and one max-threads constraint. If any single service behind the entry proxy experiences a flood of requests (due to a spike or a misconfigured client), it drains the work manager dry, and all the rest of the services become unresponsive too.
With that in mind, we implemented the following schema. It is a bit more involved, but it has served us well for a few years now:
EntryProxy 1 -> Inbound Interceptor Proxy -> Proxy 1
EntryProxy 2 -> Inbound Interceptor Proxy -> Proxy 2
The entry proxy does nothing but forward the request to the Inbound Interceptor. Each entry proxy, though, has its own WSDL, authentication, and work manager, which allows for fine-grained control.
Another important aspect of an entry proxy is that it passes a custom header containing the destination name to the Interceptor, e.g. TargetURI="Paypal/ProxyService/Paypal".
The Inbound Interceptor Proxy does all the logging, error handling, and a few other common tasks.
Then, based on the passed header, the Interceptor proxy performs a dynamic route to the given destination.
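Because the destination arrives in the TargetURI header, the Interceptor's routing expression stays trivial. A sketch, assuming `$targetUri` has already been assigned from the inbound transport headers and that the Dynamic Routing action accepts a `ctx:route` element of this shape:

```xquery
(: Hypothetical Dynamic Routing expression in the Interceptor;
   $targetUri is assumed to hold the TargetURI header value,
   e.g. "Paypal/ProxyService/Paypal". :)
<ctx:route xmlns:ctx="http://www.bea.com/wli/sb/context">
  <ctx:service isProxy="true">{ $targetUri }</ctx:service>
</ctx:route>
```

No routing table is needed in the Interceptor itself; adding a new destination only requires a new entry proxy that sets its own TargetURI.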
Yes, this design has one extra moving part (an entry proxy), but it (a) works and (b) keeps all control in our hands. The entry proxy is a very small artifact; when I need a new one, I just copy an existing one and replace the WSDL and the TargetURI value, about 30 seconds of work.
Hope that helps.
Thanks Vlad, appreciate the effort and time you spent.