We had a similar requirement in our project: three different sources had to be loaded into Endeca, and we wrote custom CAS plugins to do it. Write a custom Crawl module that reads from the web service and populates the Record Store. You should be able to build a jar from the custom classes, drop it into the CAS plugins directory, and write a custom crawl configuration that invokes your Crawl module to do the work.
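To make the idea concrete, here is a minimal sketch of the crawl loop: pull items from a web service and write each one into a record store as a property map. The WebServiceClient and RecordStoreWriter interfaces below are hypothetical stand-ins I made up for illustration; a real plugin would implement the extension interfaces from the CAS Extension API instead (see the CAS Developer's Guide).

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class WebServiceCrawlSketch {

    /** Hypothetical web-service client: returns one property map per item. */
    interface WebServiceClient {
        List<Map<String, String>> fetchItems();
    }

    /** Hypothetical record-store writer: receives one record at a time. */
    interface RecordStoreWriter {
        void write(Map<String, String> record);
    }

    /** Core crawl loop: read every item and persist it as a record. */
    static int crawl(WebServiceClient client, RecordStoreWriter writer) {
        int count = 0;
        for (Map<String, String> item : client.fetchItems()) {
            Map<String, String> record = new LinkedHashMap<>(item);
            // Each record needs a unique identifier property; the name and
            // scheme here are placeholders, not the real CAS convention.
            record.putIfAbsent("record.id", "ws-" + count);
            writer.write(record);
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Stub client returning two fake items, plus an in-memory "store".
        WebServiceClient client = () -> {
            List<Map<String, String>> items = new ArrayList<>();
            Map<String, String> a = new LinkedHashMap<>();
            a.put("title", "First item");
            items.add(a);
            Map<String, String> b = new LinkedHashMap<>();
            b.put("title", "Second item");
            items.add(b);
            return items;
        };
        List<Map<String, String>> stored = new ArrayList<>();
        int written = crawl(client, stored::add);
        System.out.println("records written: " + written);
    }
}
```

The point of the shape: keep the web-service mapping logic in one place, so the same jar can be reconfigured per source through the crawl configuration rather than per-source code changes.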
Let me know if you need any help.
Thanks for your valuable response. Do we need to install CAS on our machine? I have also heard about an adapter in Web Studio called "spider". Do you have any idea about this spider adapter?
Crawling is part of CAS, which, as you probably know, is the data-ingest layer in Endeca. There are a number of OOTB crawlers available (filesystem-based, XML crawlers, etc.), but there are cases where you have to write your own, and that is what I am suggesting here, because the OOTB XML crawl expects a specific format that isn't really mentioned in the documentation. Please refer to the CAS Developer's Guide for more information. CAS, Platform Services, and Tools and Frameworks should all be running when you start working with Endeca; you can check that with
ps -ef | grep java
on your machine.
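If you want something a bit more targeted than eyeballing the full grep output, a small wrapper can report each service separately. The grep patterns below are assumptions; match them to the actual process names or install paths on your machine.

```shell
#!/bin/sh
# Report whether a named Endeca service appears in the process list.
check_service() {
  # "grep -v grep" drops the grep process itself from the ps listing
  if ps -ef | grep -i "$1" | grep -v grep > /dev/null; then
    echo "$1: running"
  else
    echo "$1: NOT running"
  fi
}

check_service cas
check_service platformservices
check_service toolsandframeworks
```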