11 Replies Latest reply: Oct 3, 2012 4:20 AM by SunilN

    How is it possible to reflect Workbench changes in a clustered environment

    SunilN
      Hi All,

      I am running Endeca on MachineA and MachineB, each with its own MDEX Engine.
      A Dgraph cluster is implemented across MachineA and MachineB, so data is updated on MachineB when I run a baseline update on MachineA.
      I have installed Experience Manager on MachineA and created some pages using Workbench.
      The rules are getting fired on MachineA without a baseline update, but I noticed that the same rules are not working on MachineB even though both machines are in the cluster.
      When I run a baseline update on MachineA, the rules start working on MachineB.


      How is it possible to reflect Workbench changes on both clustered MDEX Engines without running a baseline update?

      Please share your suggestions.

      Thanks in Advance,
      SunilN
        • 1. Re: How is it possible to reflect Workbench changes in a clustered environment
          TimK
          Hi Sunil,
          Have you tried Config Update? Either by script in the control directory or in Workbench (EAC Admin -> Admin Console -> Scripts - Config Update).
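          For reference, the provisioned command for that script typically looks something like the following (run from the application's control directory; the exact path and the .sh/.bat extension depend on your environment, so treat this as a sketch rather than the exact syntax for your install):

          ./control/runcommand.sh ConfigUpdate run      (on Windows: .\control\runcommand.bat ConfigUpdate run)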
          -Tim
          • 2. Re: How is it possible to reflect Workbench changes in a clustered environment
            953835
            Try this:
            <path>/control/runcommand.sh --update-definition
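            If you are on Windows, the equivalent would presumably be the .bat version of the same script (an assumption based on the standard Deployment Template layout):
            <path>\control\runcommand.bat --update-definition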
            • 3. Re: How is it possible to reflect Workbench changes in a clustered environment
              SunilN
              Hi Guys,

              I have tried both of the approaches you suggested.
              But the rules are still not being fired on MachineB.

              I tested with endeca_jspref on MachineB, and the rules are not reflected there.
              Below is my AppConfig.xml file from MachineA:

              <?xml version="1.0" encoding="UTF-8"?>
              <!--
              ##########################################################################
              # This file contains settings for an EAC application.
              #
              -->
              <spr:beans xmlns:spr="http://www.springframework.org/schema/beans"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns:tx="http://www.springframework.org/schema/tx"
              xmlns:aop="http://www.springframework.org/schema/aop"
              xmlns="http://www.endeca.com/schema/eacToolkit"
              xsi:schemaLocation="
              http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd
              http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
              http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd
              http://www.endeca.com/schema/eacToolkit http://www.endeca.com/schema/eacToolkit/eacToolkit.xsd">

              <app appName="WineStore" eacHost="MachineA" eacPort="8888"
              dataPrefix="WineStore" sslEnabled="false" lockManager="LockManager">
              <working-dir>${ENDECA_PROJECT_DIR}</working-dir>
              <log-dir>./logs</log-dir>
              </app>


              <host id="ITLHost" hostName="MachineA" port="8888" />
              <host id="MDEXHost" hostName="MachineA" port="8888" />
              <host id="MDEXHost2" hostName="MachineB" port="8888" />
              <host id="webstudio" hostName="MachineA" port="8888" >
              <directories>
              <directory name="webstudio-report-dir">./reports</directory>
              </directories>
              </host>

              <lock-manager id="LockManager" releaseLocksOnFailure="true" />

              <script id="InitialSetup">
              <bean-shell-script>
              <![CDATA[
                  if (ConfigManager.isWebStudioEnabled()) {
                    log.info("Updating Oracle Endeca Workbench configuration...");
                    ConfigManager.updateWsConfig();
                    log.info("Finished updating Oracle Endeca Workbench.");
                  }
              ]]>
              </bean-shell-script>
              </script>

              <script id="BaselineUpdate">
              <log-dir>./logs/provisioned_scripts</log-dir>
              <provisioned-script-command>./control/baseline_update.bat</provisioned-script-command>
              <bean-shell-script>
              <![CDATA[
                  log.info("Starting baseline update script.");
                  // obtain lock
                  if (LockManager.acquireLock("update_lock")) {
                    // test if data is ready for processing
                    if (Forge.isDataReady()) {
                      if (ConfigManager.isWebStudioEnabled()) {
                        // get Web Studio config, merge with Dev Studio config
                        ConfigManager.downloadWsConfig();
                        ConfigManager.fetchMergedConfig();
                      } else {
                        ConfigManager.fetchDsConfig();
                      }

                      // clean directories
                      Forge.cleanDirs();
                      PartialForge.cleanCumulativePartials();
                      Dgidx.cleanDirs();

                      // fetch extracted data files to forge input
                      Forge.getIncomingData();
                      LockManager.removeFlag("baseline_data_ready");

                      // fetch config files to forge input
                      Forge.getConfig();

                      // archive logs and run ITL
                      Forge.archiveLogDir();
                      Forge.run();
                      Dgidx.archiveLogDir();
                      Dgidx.run();

                      // distributed index, update Dgraphs
                      DistributeIndexAndApply.run();

                      // if Web Studio is integrated, update Web Studio with latest
                      // dimension values
                      if (ConfigManager.isWebStudioEnabled()) {
                        ConfigManager.cleanDirs();
                        Forge.getPostForgeDimensions();
                        ConfigManager.updateWsDimensions();
                      }

                      // archive state files, index
                      Forge.archiveState();
                      Dgidx.archiveIndex();

                      // (start or) cycle the LogServer
                      LogServer.cycle();
                    } else {
                      log.warning("Baseline data not ready for processing.");
                    }
                    // release lock
                    LockManager.releaseLock("update_lock");
                    log.info("Baseline update script finished.");
                  } else {
                    log.warning("Failed to obtain lock.");
                  }
              ]]>
              </bean-shell-script>
              </script>

              <script id="DistributeIndexAndApply">
              <bean-shell-script>
              <![CDATA[
                  DgraphCluster.cleanDirs();
                  DgraphCluster.copyIndexToDgraphServers();
                  DgraphCluster.applyIndex();
                    ]]>
              </bean-shell-script>
              </script>

              <script id="LoadXQueryModules">
              <bean-shell-script>
              <![CDATA[
                  DgraphCluster.cleanLocalXQueryDirs();
                  DgraphCluster.copyXQueryToDgraphServers();
                  DgraphCluster.reloadXqueryModules();
                    ]]>
              </bean-shell-script>
              </script>

              <script id="ConfigUpdate">
              <log-dir>./logs/provisioned_scripts</log-dir>
              <provisioned-script-command>./control/runcommand.bat ConfigUpdate run</provisioned-script-command>
              <bean-shell-script>
              <![CDATA[
                  log.info("Starting dgraph config update script.");
                  if (ConfigManager.isWebStudioEnabled()) {
                    ConfigManager.downloadWsDgraphConfig();
                    DgraphCluster.cleanLocalDgraphConfigDirs();
                    DgraphCluster.copyDgraphConfigToDgraphServers();
                    DgraphCluster.applyConfigUpdate();
                  } else {
              log.warning("Web Studio integration is disabled. No action will be taken.");
              }
              log.info("Finished updating dgraph config.");
              ]]>
              </bean-shell-script>
              </script>

              <custom-component id="ConfigManager" host-id="ITLHost" class="com.endeca.soleng.eac.toolkit.component.ConfigManagerComponent">
              <properties>
              <property name="webStudioEnabled" value="true" />
              <property name="webStudioHost" value="MachineA" />
              <property name="webStudioPort" value="8006" />
              <property name="webStudioMaintainedFile1" value="thesaurus.xml" />
              <property name="webStudioMaintainedFile2" value="merch_rule_group_default.xml" />
              <property name="webStudioMaintainedFile3" value="merch_rule_group_default_redirects.xml" />
                   <property name="webStudioMaintainedFile4" value="merch_rule_group_MobilePages.xml"/>
                   <property name="webStudioMaintainedFile5" value="merch_rule_group_NavigationPages.xml"/>
                   <property name="webStudioMaintainedFile6" value="merch_rule_group_SearchPages.xml"/>
              </properties>
              <directories>
              <directory name="devStudioConfigDir">./config/pipeline</directory>
              <directory name="webStudioConfigDir">./data/web_studio/config</directory>
              <directory name="webStudioDgraphConfigDir">./data/web_studio/dgraph_config</directory>
              <directory name="mergedConfigDir">./data/complete_index_config</directory>
              <directory name="webStudioTempDir">./data/web_studio/temp</directory>
              </directories>
              </custom-component>

              <forge id="Forge" host-id="ITLHost">
              <properties>
              <property name="numStateBackups" value="10" />
              <property name="numLogBackups" value="10" />
              </properties>
              <directories>
              <directory name="incomingDataDir">./data/incoming</directory>
              <directory name="configDir">./data/complete_index_config</directory>
              <directory name="wsTempDir">./data/web_studio/temp</directory>
              </directories>
              <args>
              <arg>-vw</arg>
              </args>
              <log-dir>./logs/forges/Forge</log-dir>
              <input-dir>./data/processing</input-dir>
              <output-dir>./data/forge_output</output-dir>
              <state-dir>./data/state</state-dir>
              <temp-dir>./data/temp</temp-dir>
              <num-partitions>1</num-partitions>
              <pipeline-file>./data/processing/pipeline.epx</pipeline-file>
              </forge>

              <dgidx id="Dgidx" host-id="ITLHost">
              <properties>
              <property name="numLogBackups" value="10" />
              <property name="numIndexBackups" value="3" />
              </properties>
              <args>
              <arg>-v</arg>
              </args>
              <log-dir>./logs/dgidxs/Dgidx</log-dir>
              <input-dir>./data/forge_output</input-dir>
              <output-dir>./data/dgidx_output</output-dir>
              <temp-dir>./data/temp</temp-dir>
              <run-aspell>true</run-aspell>
              </dgidx>

              <dgraph-cluster id="DgraphCluster" getDataInParallel="true">
              <dgraph ref="Dgraph1" />
              <dgraph ref="Dgraph2" />
                   <dgraph ref="Dgraph3" />
              </dgraph-cluster>

              <dgraph-defaults>
              <properties>
              <property name="srcIndexDir" value="./data/dgidx_output" />
              <property name="srcIndexHostId" value="ITLHost" />
              <property name="srcPartialsDir" value="./data/partials/forge_output" />
              <property name="srcPartialsHostId" value="ITLHost" />
              <property name="srcCumulativePartialsDir" value="./data/partials/cumulative_partials" />
              <property name="srcCumulativePartialsHostId" value="ITLHost" />
              <property name="srcDgraphConfigDir" value="./data/web_studio/dgraph_config" />
              <property name="srcDgraphConfigHostId" value="ITLHost" />
              <property name="srcXQueryHostId" value="ITLHost" />
              <property name="srcXQueryDir" value="./config/lib/xquery" />
              <property name="numLogBackups" value="10" />
              <property name="shutdownTimeout" value="30" />
              <property name="numIdleSecondsAfterStop" value="0" />
              </properties>
              <directories>
              <directory name="localIndexDir">./data/dgraphs/local_dgraph_input</directory>
              <directory name="localCumulativePartialsDir">./data/dgraphs/local_cumulative_partials</directory>
              <directory name="localDgraphConfigDir">./data/dgraphs/local_dgraph_config</directory>
              <directory name="localXQueryDir">./data/dgraphs/local_xquery</directory>
              </directories>
              <args>
              <arg>--threads</arg>
              <arg>2</arg>
              <arg>--spl</arg>
              <arg>--dym</arg>
              <arg>--xquery_path</arg>
              <arg>./data/dgraphs/local_xquery</arg>
              </args>
              <startup-timeout>120</startup-timeout>
              </dgraph-defaults>

              <dgraph id="Dgraph1" host-id="MDEXHost" port="15000">
              <properties>
              <property name="restartGroup" value="A" />
              <property name="updateGroup" value="a" />
              </properties>
              <log-dir>./logs/dgraphs/Dgraph1</log-dir>
              <input-dir>./data/dgraphs/Dgraph1/dgraph_input</input-dir>
              <update-dir>./data/dgraphs/Dgraph1/dgraph_input/updates</update-dir>
              </dgraph>

              <dgraph id="Dgraph2" host-id="MDEXHost" port="15001">
              <properties>
              <property name="restartGroup" value="B" />
              <property name="updateGroup" value="a" />
              </properties>
              <log-dir>./logs/dgraphs/Dgraph2</log-dir>
              <input-dir>./data/dgraphs/Dgraph2/dgraph_input</input-dir>
              <update-dir>./data/dgraphs/Dgraph2/dgraph_input/updates</update-dir>
              </dgraph>

              <dgraph id="Dgraph3" host-id="MDEXHost2" port="15000">
              <properties>
              <property name="restartGroup" value="B" />
              <property name="updateGroup" value="a" />
              </properties>
              <log-dir>./logs/dgraphs/Dgraph3</log-dir>
              <input-dir>./data/dgraphs/Dgraph3/dgraph_input</input-dir>
              <update-dir>./data/dgraphs/Dgraph3/dgraph_input/updates</update-dir>
              </dgraph>

              </spr:beans>

              Do I need to change anything else?

              Please advise.

              Thanks
              SunilN
              • 4. Re: How is it possible to reflect Workbench changes in a clustered environment
                TimK
                Hi Sunil,
                At first glance, your appconfig looks ok to me. I'm sure you know all of the following, but here's where I'd start:
                I'll assume that your changes are in one of the defined webStudioMaintainedFileX files.
                What's in the dgraph log on MDEXHost2?
                Is there anything in the ConfigUpdate log files on the ITL server after you run that script?
                -Tim
                • 5. Re: How is it possible to reflect Workbench changes in a clustered environment
                  sean horgan - oracle
                  Hi Sunil,
                  What version of workbench are you using?

                  Sean
                  • 6. Re: How is it possible to reflect Workbench changes in a clustered environment
                    SunilN
                    Hi Tim,

                    I am getting the following Dgraph log entries on MDEXHost2:

                    Errors detected during config update:
                    Unknown style "Style 1" specified in STYLE_NAME attribute specified for MERCH_RULE element with id 1. Ignoring this MERCH_RULE element.
                    Unknown style "Style 1" specified in STYLE_NAME attribute specified for MERCH_RULE element with id 4. Ignoring this MERCH_RULE element.
                    Unknown style "Style 1" specified in STYLE_NAME attribute specified for MERCH_RULE element with id 6. Ignoring this MERCH_RULE element.
                    Unknown style "Style 2" specified in STYLE_NAME attribute specified for MERCH_RULE element with id 2. Ignoring this MERCH_RULE element.
                    Unknown style "Style 3" specified in STYLE_NAME attribute specified for MERCH_RULE element with id 5. Ignoring this MERCH_RULE element.

                    Unable to delete WineStore.merch_rule_group_default.xml during config update. Leaving on disk.
                    Unable to delete WineStore.merch_rule_group_default_redirects.xml during config update. Leaving on disk.

                    Any idea about this?

                    Thanks,
                    Sunil
                    • 7. Re: How is it possible to reflect Workbench changes in a clustered environment
                      SunilN
                      Hi Sean,

                      I am using Workbench version 2.1.2.

                      Thanks,
                      • 8. Re: How is it possible to reflect Workbench changes in a clustered environment
                        SunilN
                        Hi Guys,

                        Any update on this?

                        Thanks,
                        Sunil
                        • 9. Re: How is it possible to reflect Workbench changes in a clustered environment
                          953835
                          ()

                          Edited by: EndecaJoe on Oct 1, 2012 6:53 AM
                          • 10. Re: How is it possible to reflect Workbench changes in a clustered environment
                            sabdelhalim
                            Hi,

                            Maybe you should take a look at the Deployment Template Usage Guide, pages 71-73, in the section entitled "Endeca Workbench deployed with a preview Dgraph":

                            " This deployment approach calls for a new Dgraph to be configured in the production environment to
                            serve as the Workbench preview Dgraph.
                            This Dgraph is updated along with others in the production environment each time an update is
                            processed, but it does not serve traffic for the production application. Instead, this Dgraph is used for
                            preview by Endeca Workbench.
                            By default, Endeca Workbench uses an EAC script to update MDEX Engines with new configuration
                            each time the Save Changes button is pressed. With this deployment approach, the default functionality
                            would update each production Dgraph in addition to the preview Dgraph. There are two ways to override
                            this default behavior."

                            Roughly speaking, it shows how to declare a preview Dgraph in addition to the ones you are already using, alongside your new Experience Manager (formerly Workbench); a rough sketch of what that declaration might look like is below.
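
                            As an illustration only (the id "DgraphPreview", port 15002, and the directory names are placeholders, not values from the guide), the preview Dgraph is just another dgraph declaration in AppConfig.xml, added to the existing cluster so that it receives each new index along with the production Dgraphs, as the quoted passage describes:

                            <dgraph id="DgraphPreview" host-id="MDEXHost" port="15002">
                            <log-dir>./logs/dgraphs/DgraphPreview</log-dir>
                            <input-dir>./data/dgraphs/DgraphPreview/dgraph_input</input-dir>
                            <update-dir>./data/dgraphs/DgraphPreview/dgraph_input/updates</update-dir>
                            </dgraph>

                            <dgraph-cluster id="DgraphCluster" getDataInParallel="true">
                            <dgraph ref="Dgraph1" />
                            <dgraph ref="Dgraph2" />
                            <dgraph ref="Dgraph3" />
                            <dgraph ref="DgraphPreview" />
                            </dgraph-cluster>

                            The guide then describes the two ways to keep the Workbench-triggered config update from touching the production Dgraphs; which one fits depends on your deployment.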
                            Hope that helps.
                            Regards,
                            Saleh
                            • 11. Re: How is it possible to reflect Workbench changes in a clustered environment
                              SunilN
                              Hi Saleh,

                              Thanks for the update.
                              I have implemented this successfully.

                              Thanks,
                              SunilN