
JGroups nodes on EC2 see each other but won't talk

I'm trying to use Hibernate Search so that all writes to the Lucene index from jgroupsSlave nodes are sent to the jgroupsMaster node, with the Lucene index then shared back to the slaves via Infinispan. Everything works locally, but while the nodes discover each other on EC2, they don't appear to be communicating.

They are both sending each other are-you-alive messages.

# master output sample 
86522 [LockBreakingService,localCache,archlinux-37498] DEBUG org.infinispan.transaction.TransactionTable - About to cleanup completed transaction. Initial size is 0 
86523 [LockBreakingService,LuceneIndexesLocking,archlinux-37498] DEBUG org.infinispan.transaction.TransactionTable - About to cleanup completed transaction. Initial size is 0 
87449 [Timer-4,luceneCluster,archlinux-37498] DEBUG org.jgroups.protocols.FD - sending are-you-alive msg to archlinux-57950 (own address=archlinux-37498) 
87522 [LockBreakingService,localCache,archlinux-37498] DEBUG org.infinispan.transaction.TransactionTable - About to cleanup completed transaction. Initial size is 0 
87523 [LockBreakingService,LuceneIndexesLocking,archlinux-37498] DEBUG org.infinispan.transaction.TransactionTable - About to cleanup completed transaction. Initial size is 0 

# slave output sample 
85499 [LockBreakingService,localCache,archlinux-57950] DEBUG org.infinispan.transaction.TransactionTable - About to cleanup completed transaction. Initial size is 0 
85503 [LockBreakingService,LuceneIndexesLocking,archlinux-57950] DEBUG org.infinispan.transaction.TransactionTable - About to cleanup completed transaction. Initial size is 0 
86190 [Timer-3,luceneCluster,archlinux-57950] DEBUG org.jgroups.protocols.FD - sending are-you-alive msg to archlinux-37498 (own address=archlinux-57950) 
86499 [LockBreakingService,localCache,archlinux-57950] DEBUG org.infinispan.transaction.TransactionTable - About to cleanup completed transaction. Initial size is 0 
86503 [LockBreakingService,LuceneIndexesLocking,archlinux-57950] DEBUG org.infinispan.transaction.TransactionTable - About to cleanup completed transaction. Initial size is 0 

Security group

I have two jars, one master and one slave, that I'm running on their own EC2 instances. I can ping each instance from the other, and they are both in the same security group, which defines the following rules for communication between any machines in the group.

ALL ports for ICMP
0-65535 for TCP
0-65535 for UDP

So I don't think it's a security group configuration problem.
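
For reference, rules like these can be reproduced with the AWS CLI; this is just a sketch, and sg-12345678 is a placeholder for the actual security group ID:

# allow all TCP, UDP, and ICMP traffic between members of the same group 
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 0-65535 --source-group sg-12345678 
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol udp --port 0-65535 --source-group sg-12345678 
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol icmp --port -1 --source-group sg-12345678 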

hibernate.properties

# there is also a corresponding jgroupsSlave 
hibernate.search.default.worker.backend=jgroupsMaster 
hibernate.search.default.directory_provider = infinispan 
hibernate.search.infinispan.configuration_resourcename=infinispan.xml 
hibernate.search.default.data_cachename=localCache 
hibernate.search.default.metadata_cachename=localCache 
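
For completeness, a sketch of the corresponding slave-side hibernate.properties; per the comment above, only the worker backend differs (plus the cache store path noted in infinispan.xml below):

# slave-side equivalent of the file above 
hibernate.search.default.worker.backend=jgroupsSlave 
hibernate.search.default.directory_provider = infinispan 
hibernate.search.infinispan.configuration_resourcename=infinispan.xml 
hibernate.search.default.data_cachename=localCache 
hibernate.search.default.metadata_cachename=localCache 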

infinispan.xml

<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
      xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd" 
      xmlns="urn:infinispan:config:5.1"> 
    <global> 
     <transport clusterName="luceneCluster" transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport"> 
      <properties> 
       <property name="configurationFile" value="jgroups-ec2.xml" /> 
      </properties> 
     </transport> 
    </global> 

    <default> 
     <invocationBatching enabled="true" /> 
     <clustering mode="repl"> 

     </clustering> 
    </default> 

    <!-- this is just so that each machine doesn't have to store the index 
     in memory --> 
    <namedCache name="localCache"> 
     <loaders passivation="false" preload="true" shared="false"> 
      <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true" ignoreModifications="false" purgeOnStartup="false"> 
       <properties> 
        <property name="location" value="/tmp/infinspan/master" /> 
        <!-- there is a corresponding /tmp/infinispan/slave in 
        the slave config --> 
       </properties> 
      </loader> 
     </loaders> 
    </namedCache> 
</infinispan> 

jgroups-ec2.xml

<config xmlns="urn:org:jgroups" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
     xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.2.xsd"> 
    <TCP 
      bind_addr="${jgroups.tcp.address:127.0.0.1}" 
      bind_port="${jgroups.tcp.port:7800}" 
      loopback="true" 
      port_range="30" 
      recv_buf_size="20000000" 
      send_buf_size="640000" 
      max_bundle_size="64000" 
      max_bundle_timeout="30" 
      enable_bundling="true" 
      use_send_queues="true" 
      sock_conn_timeout="300" 
      enable_diagnostics="false" 

      bundler_type="old" 

      thread_pool.enabled="true" 
      thread_pool.min_threads="2" 
      thread_pool.max_threads="30" 
      thread_pool.keep_alive_time="60000" 
      thread_pool.queue_enabled="false" 
      thread_pool.queue_max_size="100" 
      thread_pool.rejection_policy="Discard" 

      oob_thread_pool.enabled="true" 
      oob_thread_pool.min_threads="2" 
      oob_thread_pool.max_threads="30" 
      oob_thread_pool.keep_alive_time="60000" 
      oob_thread_pool.queue_enabled="false" 
      oob_thread_pool.queue_max_size="100" 
      oob_thread_pool.rejection_policy="Discard" 
      /> 
    <S3_PING secret_access_key="removed_for_stackoverflow" access_key="removed_for_stackoverflow" location="jgroups_ping" /> 

    <MERGE2 max_interval="30000" 
      min_interval="10000"/> 
    <FD_SOCK/> 
    <FD timeout="3000" max_tries="3"/> 
    <VERIFY_SUSPECT timeout="1500"/> 
    <pbcast.NAKACK2 
      use_mcast_xmit="false" 
      xmit_interval="1000" 
      xmit_table_num_rows="100" 
      xmit_table_msgs_per_row="10000" 
      xmit_table_max_compaction_time="10000" 
      max_msg_batch_size="100" 
      become_server_queue_size="0"/> 
    <UNICAST2 
      max_bytes="20M" 
      xmit_table_num_rows="20" 
      xmit_table_msgs_per_row="10000" 
      xmit_table_max_compaction_time="10000" 
      max_msg_batch_size="100"/> 
    <RSVP /> 
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" 
        max_bytes="400000"/> 
    <pbcast.GMS print_local_addr="false" join_timeout="7000" view_bundling="true"/> 
    <UFC max_credits="2000000" min_threshold="0.10"/> 
    <MFC max_credits="2000000" min_threshold="0.10"/> 
    <FRAG2 frag_size="60000"/> 
</config> 

I copied this directly from the most recent infinispan-core distribution (5.2.0.Beta3, though I also tried 5.1.4, I think). The only thing I changed was replacing their S3_PING settings with mine, and again I can see the nodes writing to S3 and finding each other, so I don't think that's the problem. I'm also starting the master/slave with environment variables that set jgroups.tcp.address to their private IP addresses. I also tried some greatly simplified configurations without any success.
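
For reference, a sketch of what that launch looks like; the jar name and IP are placeholders, and the value is typically passed as a system property, which JGroups substitutes into the ${jgroups.tcp.address:127.0.0.1} expression in the TCP element above:

# placeholder jar and private IP; overrides the 127.0.0.1 default in jgroups-ec2.xml 
java -Djgroups.tcp.address=10.0.0.5 -jar master.jar 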

Any idea what the problem could be? I've spent a few days playing with it and it's driving me crazy. I think it has to be something in the jgroups configuration, since it works locally and just can't talk on EC2.

Is there any other information that would help figure this out?

Answer


You have two JGroups channels being started, so you need to specify two JGroups configurations: one for Infinispan and one for the backend worker communication.

Both Infinispan and the jgroupsMaster worker will use their default configuration settings unless you specify one, but the defaults use multicast, which doesn't work on EC2.

It looks like you have the Infinispan index configuration set up correctly, but you have to reconfigure the jgroupsMaster worker to use S3_PING or JDBC_PING as well; it is probably working for you locally because the default configuration is able to auto-discover peers using multicast.
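
Concretely, the fix (as the accepted comment below confirms) is one extra line in hibernate.properties pointing the worker channel at the same EC2-friendly JGroups stack:

# reuse the S3_PING-based stack for the backend worker channel 
hibernate.search.services.jgroups.configurationFile=jgroups-ec2.xml 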

This duplication will be resolved by HSEARCH-882, which I expect to significantly simplify configuration.


If I could kiss you over the internet, I would. It was as simple as putting "hibernate.search.services.jgroups.configurationFile=jgroups-ec2.xml" into my hibernate.properties file. I could tell it was working from my output, but also because a second folder was created on S3 that hadn't been there in previous runs. Thanks again! – dustincg


Great to hear! – Sanne
