This is a homework assignment. Two files are already given to me: the Client class and the SharedValues interface. Here is the description:
I have to implement a resource handler class. It must be deadlock-free and starvation-free. I need to write a class (the resource handler) with static interfaces that manages the allocation of "n" resources and their scheduling among "k" clients. The clients have only two operations, reading and writing, with no deadlock and no starvation. If a resource is allocated for writing, no other client can use it for any other purpose. If a resource is allocated for reading, only readers can get it, writers cannot. A single key releases the allocated resources. A resource is referenced by a String (its name).
The resource handler must provide two interfaces for the clients: getLock() and releaseLock(). getLock() takes as required parameters an object (Set<String>) holding the resource names, and the requested operation (boolean: true - writing, false - reading); its return value is an identifier (long). The resource handler must block the calling client inside the getLock() call until it can grant the requested resources; the client is unblocked again when the requested resources become available for the given operation. releaseLock() returns void, and its required parameter is the identifier received from the getLock() call. A client requests the locking of a subset of the resources through the getLock() interface, and releases them with the received identifier through the releaseLock() interface.
I am no pro in Java and I have only a little experience with multithreading, so please bear with me.
Consider the following class and interface:
The SharedValues interface
public interface SharedValues
{
//resources
public final static String[] RESOURCE_LIST = new String[]{ "a", "b", "c", "d", "e", "f", "g", "h" };
//constant of the client's writing method type
public final static boolean WRITE_METHOD = true;
//constant of the client's reading method type
public final static boolean READ_METHOD = false;
//constant for the number of clients
public final static int CLIENTNUM = 5;
//minimum wait time of the client
public final static int CLIENT_HOLD_MINIMUM = 1000;
//maximum wait time difference of the client
public final static int CLIENT_HOLD_DIFF = 1000;
//time limit of the clients
public final static int RUNTIME = 20000;
}
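As an aside, the two hold-time constants are presumably meant to be combined additively, giving a wait time uniformly distributed in [CLIENT_HOLD_MINIMUM, CLIENT_HOLD_MINIMUM + CLIENT_HOLD_DIFF); combining them with a bitwise & (as the Client code below does) would not produce that range. A tiny sketch of the presumed intent (the class name is made up):

```java
import java.util.Random;

//Sketch of the presumed intended use of the two hold-time constants.
class HoldTimeDemo
{
    //same values as in the SharedValues interface
    static final int CLIENT_HOLD_MINIMUM = 1000;
    static final int CLIENT_HOLD_DIFF = 1000;

    //uniform wait in [CLIENT_HOLD_MINIMUM, CLIENT_HOLD_MINIMUM + CLIENT_HOLD_DIFF)
    static int holdTime(Random random)
    {
        return CLIENT_HOLD_MINIMUM + random.nextInt(CLIENT_HOLD_DIFF);
    }
}
```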
The Client class
import java.util.Arrays;
import java.util.Collections;
import java.util.Date;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;
import java.util.ArrayList;
//start and implementation of the client
public class Client extends Thread implements SharedValues
{
//used random for creating clients
private static Random mRandom = new Random();
//stop flag
private boolean mRunning = true;
//the operation this client uses (read or write)
private boolean mMethod = true;
//the clients want to lock these resources
private Set<String> mNeededRes = new HashSet<String>();
//identifier received from getLock(), used for releasing the client's resources
private long mLockID = -1;
//client's logging name
private String mLogName = null;
//client's constructor
public Client(String[] xResList, boolean xMethod, int xClientID)
{
super("Client_" + xClientID);
mLogName = "Client_" + xClientID;
mMethod = xMethod;
for (int i = 0; i < xResList.length; i++)
{
mNeededRes.add(xResList[ i ]);
}
}
//interface for logging
private void log(String xMessage)
{
System.out.println(new Date() + " " + mLogName + ": " + xMessage);
}
//holding resources or sleeping
private synchronized void holdResources()
{
if (!mRunning)
{
return;
}
//sleep for a random time within the interval (note: '+', not '&')
try
{
wait(mRandom.nextInt(CLIENT_HOLD_DIFF) + CLIENT_HOLD_MINIMUM);
}
catch (InterruptedException e)
{
log("Error: Resource allocating interrupted");
}
}
//for stopping interface
public synchronized void stopRunning() throws Exception
{
//change the flag and wake the client if it is sleeping
if (mRunning)
{
mRunning = false;
notify();
}
else
{
log("Error: the client has already stopped!");
}
}
//Overriding the Thread method
public void run()
{
log("Started.");
while (mRunning)
{
log(((mMethod == WRITE_METHOD) ? "Writing" : "Reading") + " requested resources: "
+ toSortedSet(mNeededRes));
final long startTime = System.currentTimeMillis();
mLockID = ResHandler.getLock(mNeededRes, mMethod);
final long elapsed = System.currentTimeMillis() - startTime;
log(((mMethod == WRITE_METHOD) ? "Writing" : "Reading") + " received resources (" + elapsed
+ " ms): " + toSortedSet(mNeededRes) + ". Lock: " + mLockID);
holdResources();
ResHandler.releaseLock(mLockID);
holdResources();
}
log("Stopped.");
}
//creating clients
private static Client createClient(int xClientID)
{
final int resNum = mRandom.nextInt(RESOURCE_LIST.length) + 1;
//randomly take out one of all resources
final ArrayList<String> selectedRes = new ArrayList<String>(Arrays.asList(RESOURCE_LIST));
for (int i = 0; i < (RESOURCE_LIST.length - resNum); i++)
{
final int chosenRes = mRandom.nextInt(selectedRes.size());
selectedRes.remove(chosenRes);
}
final boolean method = mRandom.nextInt(5) <= 2;
return new Client((String[]) selectedRes.toArray(new String[]{}), method, xClientID);
}
//auxiliary method: renders the elements of a subset in sorted order for logging
private String toSortedSet(Set<String> xSet)
{
final StringBuffer tmpSB = new StringBuffer("{ ");
final String[] sortedRes = (String[]) xSet.toArray(new String[]{});
Arrays.sort(sortedRes);
for (int i = 0; i < sortedRes.length; i++)
{
tmpSB.append(sortedRes[ i ]).append(", ");
}
tmpSB.setLength(tmpSB.length() - 2);
tmpSB.append(" }");
return tmpSB.toString();
}
public static void main(String[] args) throws Exception
{
//keep references to the clients so they can be stopped later
final Client[] clientArr = new Client[ CLIENTNUM ];
for (int i = 0; i < clientArr.length; i++)
{
clientArr[ i ] = createClient(i);
clientArr[ i ].start();
//the clients do not start at the same time
try
{
Thread.sleep(mRandom.nextInt(CLIENT_HOLD_MINIMUM));
}
catch (InterruptedException e)
{
e.printStackTrace();
}
}
//sleep for the clients' running time
try
{
Thread.sleep(RUNTIME);
}
catch (InterruptedException e)
{
e.printStackTrace();
}
//stopping the clients
for (int i = 0; i < clientArr.length; i++)
{
clientArr[ i ].stopRunning();
try
{
clientArr[ i ].join();
}
catch (InterruptedException e)
{
e.printStackTrace();
}
}
}
}
This is what I have written so far. In the client log I see that the lock is always 0 and the elapsed time is 0 ms, but I don't know why.
The ResHandler class
import java.util.Set;
class ResHandler {
//note: this stub never blocks and never increments identifier,
//which is why every log line shows "Lock: 0" after ~0 ms
private static long identifier;
public static long getLock(Set<String> mNeededRes, boolean mMethod) {
return identifier;
}
public static void releaseLock(long mLockID) {
}
}
Here is the output:
Wed Oct 09 04:42:25 CEST 2013 Client_0: Started.
Wed Oct 09 04:42:25 CEST 2013 Client_0: Writing requested resources: { b, c, d, g, h }
Wed Oct 09 04:42:25 CEST 2013 Client_0: Writing received resources (4 ms): { b, c, d, g, h }. Lock: 0
Wed Oct 09 04:42:26 CEST 2013 Client_1: Started.
Wed Oct 09 04:42:26 CEST 2013 Client_1: Writing requested resources: { a, b, c, d, e, f, g, h }
Wed Oct 09 04:42:26 CEST 2013 Client_1: Writing received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:26 CEST 2013 Client_0: Writing requested resources: { b, c, d, g, h }
Wed Oct 09 04:42:26 CEST 2013 Client_0: Writing received resources (0 ms): { b, c, d, g, h }. Lock: 0
Wed Oct 09 04:42:26 CEST 2013 Client_1: Writing requested resources: { a, b, c, d, e, f, g, h }
Wed Oct 09 04:42:26 CEST 2013 Client_1: Writing received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:26 CEST 2013 Client_2: Started.
Wed Oct 09 04:42:26 CEST 2013 Client_2: Writing requested resources: { a, b, d, e, f, g, h }
Wed Oct 09 04:42:26 CEST 2013 Client_2: Writing received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:27 CEST 2013 Client_0: Writing requested resources: { b, c, d, g, h }
Wed Oct 09 04:42:27 CEST 2013 Client_0: Writing received resources (0 ms): { b, c, d, g, h }. Lock: 0
Wed Oct 09 04:42:27 CEST 2013 Client_3: Started.
Wed Oct 09 04:42:27 CEST 2013 Client_3: Reading requested resources: { a, b, c, d, e, f, g, h }
Wed Oct 09 04:42:27 CEST 2013 Client_3: Reading received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:27 CEST 2013 Client_4: Started.
Wed Oct 09 04:42:27 CEST 2013 Client_4: Reading requested resources: { f, h }
Wed Oct 09 04:42:27 CEST 2013 Client_4: Reading received resources (0 ms): { f, h }. Lock: 0
Wed Oct 09 04:42:27 CEST 2013 Client_1: Writing requested resources: { a, b, c, d, e, f, g, h }
Wed Oct 09 04:42:27 CEST 2013 Client_1: Writing received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:28 CEST 2013 Client_0: Writing requested resources: { b, c, d, g, h }
Wed Oct 09 04:42:28 CEST 2013 Client_0: Writing received resources (0 ms): { b, c, d, g, h }. Lock: 0
Wed Oct 09 04:42:28 CEST 2013 Client_4: Reading requested resources: { f, h }
Wed Oct 09 04:42:28 CEST 2013 Client_4: Reading received resources (0 ms): { f, h }. Lock: 0
Wed Oct 09 04:42:28 CEST 2013 Client_3: Reading requested resources: { a, b, c, d, e, f, g, h }
Wed Oct 09 04:42:28 CEST 2013 Client_3: Reading received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:28 CEST 2013 Client_2: Writing requested resources: { a, b, d, e, f, g, h }
Wed Oct 09 04:42:28 CEST 2013 Client_2: Writing received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:28 CEST 2013 Client_3: Reading requested resources: { a, b, c, d, e, f, g, h }
Wed Oct 09 04:42:28 CEST 2013 Client_3: Reading received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:29 CEST 2013 Client_1: Writing requested resources: { a, b, c, d, e, f, g, h }
Wed Oct 09 04:42:29 CEST 2013 Client_1: Writing received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:29 CEST 2013 Client_2: Writing requested resources: { a, b, d, e, f, g, h }
Wed Oct 09 04:42:29 CEST 2013 Client_2: Writing received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:29 CEST 2013 Client_4: Reading requested resources: { f, h }
Wed Oct 09 04:42:29 CEST 2013 Client_4: Reading received resources (0 ms): { f, h }. Lock: 0
Wed Oct 09 04:42:29 CEST 2013 Client_0: Writing requested resources: { b, c, d, g, h }
Wed Oct 09 04:42:29 CEST 2013 Client_0: Writing received resources (0 ms): { b, c, d, g, h }. Lock: 0
...
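For reference, one way to make getLock() actually block is a single shared monitor that checks the whole request atomically before granting anything. This is only a sketch of one possible approach, not part of the assignment's given code; the class and field names are made up, and strict starvation-freedom would still need a FIFO queueing discipline on top of notifyAll():

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

//Sketch of a blocking handler built on one global monitor. All bookkeeping is
//guarded by a single lock, and a request is granted only when every resource
//in it is available, so no thread ever holds part of a request while waiting
//(no hold-and-wait, hence no deadlock).
class MonitorResHandler
{
    private static final Object MONITOR = new Object();
    private static long nextId = 0;
    //resource name -> number of active readers
    private static final Map<String, Integer> readers = new HashMap<String, Integer>();
    //resources currently held by a writer
    private static final Set<String> writeLocked = new HashSet<String>();
    //lock id -> granted resources, and whether the grant was for writing
    private static final Map<Long, Set<String>> grants = new HashMap<Long, Set<String>>();
    private static final Map<Long, Boolean> grantOps = new HashMap<Long, Boolean>();

    public static long getLock(Set<String> xNeededRes, boolean xWrite)
    {
        synchronized (MONITOR)
        {
            //wait until the whole request can be granted at once
            while (!available(xNeededRes, xWrite))
            {
                try { MONITOR.wait(); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
            for (String res : xNeededRes)
            {
                if (xWrite) writeLocked.add(res);
                else readers.merge(res, 1, Integer::sum);
            }
            long id = ++nextId;
            grants.put(id, new HashSet<String>(xNeededRes));
            grantOps.put(id, xWrite);
            return id;
        }
    }

    private static boolean available(Set<String> xNeededRes, boolean xWrite)
    {
        for (String res : xNeededRes)
        {
            if (writeLocked.contains(res)) return false;
            if (xWrite && readers.getOrDefault(res, 0) > 0) return false;
        }
        return true;
    }

    public static void releaseLock(long xLockID)
    {
        synchronized (MONITOR)
        {
            Set<String> held = grants.remove(xLockID);
            if (held == null) return;
            boolean wasWrite = grantOps.remove(xLockID);
            for (String res : held)
            {
                if (wasWrite) writeLocked.remove(res);
                else readers.merge(res, -1, Integer::sum);
            }
            //wake every waiter so each can re-check availability
            MONITOR.notifyAll();
        }
    }
}
```

Because the whole request is granted or nothing is, partial acquisition never happens. The trade-off is that notifyAll() wakes all waiters, and a steady stream of readers could still delay a writer, which is why a complete solution would add some queue ordering.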
I found half of a solution on the internet: a resource manager with ReentrantLocks.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;
public class ResHandler {
//ID-s of the granted resource lists
private static long lockNum = 0;
//Resources are identified by strings, each client has a list of demanded resources
//we store these when granted, along with an ID
private static ConcurrentHashMap<Long, Set<String>> usedResources
= new ConcurrentHashMap<Long, Set<String>>();
//We store a lock for each resource
private static ConcurrentHashMap<String, ReentrantReadWriteLock> resources
= new ConcurrentHashMap<String, ReentrantReadWriteLock>();
//Filling our resources map with the resources and their locks
static {
for (int i = 0; i < SharedValues.RESOURCE_LIST.length; ++i) {
String res = SharedValues.RESOURCE_LIST[i];
//Fair reentrant lock
ReentrantReadWriteLock lc = new ReentrantReadWriteLock(true);
resources.put(res, lc);
}
}
//We get a set of the required resources and the type of lock we have to use
public static long getLock(Set<String> mNeededRes, boolean mMethod) {
//!!!
if (mMethod == SharedValues.READ_METHOD) {
//We try to get the required resources
for (String mn : mNeededRes)
resources.get(mn).readLock().lock();
//Once granted, we put them in the usedResources map
++lockNum;
usedResources.put(lockNum, mNeededRes);
return lockNum;
}
//Same thing, but with write locks
else {
for (String mn : mNeededRes)
resources.get(mn).writeLock().lock();
++lockNum;
usedResources.put(lockNum, mNeededRes);
return lockNum;
}
}
//Releasing a set of locks by the set's ID
public static void releaseLock(long mLockID) {
if (!usedResources.containsKey(mLockID)) {
System.out.println("returned, no such key as: " + mLockID);
return;
}
Set<String> toBeReleased = usedResources.get(mLockID);
//Unlocking every lock from this set
for (String s : toBeReleased) {
if (resources.get(s).isWriteLockedByCurrentThread())
resources.get(s).writeLock().unlock();
else
resources.get(s).readLock().unlock();
}
//Deleting from the map
usedResources.remove(mLockID);
}
}
I tried this, and the output changed to this:
Fri Oct 11 10:14:40 CEST 2013 Client_0: Started.
Fri Oct 11 10:14:40 CEST 2013 Client_0: Reading requested resources: { b, c, h }
Fri Oct 11 10:14:40 CEST 2013 Client_0: Reading received resources (8 ms): { b, c, h }. Lock: 1
Fri Oct 11 10:14:40 CEST 2013 Client_1: Started.
Fri Oct 11 10:14:40 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }
Fri Oct 11 10:14:40 CEST 2013 Client_1: Reading received resources (1 ms): { a, b, c, d, f, g, h }. Lock: 2
Fri Oct 11 10:14:40 CEST 2013 Client_2: Started.
Fri Oct 11 10:14:40 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }
Fri Oct 11 10:14:40 CEST 2013 Client_2: Reading received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 3
Fri Oct 11 10:14:40 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }
Fri Oct 11 10:14:40 CEST 2013 Client_2: Reading received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 4
Fri Oct 11 10:14:41 CEST 2013 Client_3: Started.
Fri Oct 11 10:14:41 CEST 2013 Client_3: Writing requested resources: { h }
Fri Oct 11 10:14:41 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }
Fri Oct 11 10:14:41 CEST 2013 Client_0: Reading requested resources: { b, c, h }
Fri Oct 11 10:14:41 CEST 2013 Client_3: Writing received resources (303 ms): { h }. Lock: 5
Fri Oct 11 10:14:41 CEST 2013 Client_1: Reading received resources (293 ms): { a, b, c, d, f, g, h }. Lock: 6
Fri Oct 11 10:14:41 CEST 2013 Client_0: Reading received resources (171 ms): { b, c, h }. Lock: 7
Fri Oct 11 10:14:41 CEST 2013 Client_3: Writing requested resources: { h }
Fri Oct 11 10:14:41 CEST 2013 Client_4: Started.
Fri Oct 11 10:14:41 CEST 2013 Client_4: Reading requested resources: { a, b, c, d, e, f, g, h }
Fri Oct 11 10:14:42 CEST 2013 Client_3: Writing received resources (633 ms): { h }. Lock: 8
Fri Oct 11 10:14:42 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }
Fri Oct 11 10:14:42 CEST 2013 Client_4: Reading received resources (819 ms): { a, b, c, d, e, f, g, h }. Lock: 9
Fri Oct 11 10:14:42 CEST 2013 Client_2: Reading received resources (163 ms): { a, b, d, e, f, g, h }. Lock: 10
Fri Oct 11 10:14:42 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }
Fri Oct 11 10:14:42 CEST 2013 Client_1: Reading received resources (0 ms): { a, b, c, d, f, g, h }. Lock: 11
Fri Oct 11 10:14:42 CEST 2013 Client_0: Reading requested resources: { b, c, h }
Fri Oct 11 10:14:42 CEST 2013 Client_0: Reading received resources (0 ms): { b, c, h }. Lock: 12
Fri Oct 11 10:14:42 CEST 2013 Client_3: Writing requested resources: { h }
Fri Oct 11 10:14:42 CEST 2013 Client_0: Reading requested resources: { b, c, h }
Fri Oct 11 10:14:43 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }
Fri Oct 11 10:14:43 CEST 2013 Client_3: Writing received resources (447 ms): { h }. Lock: 13
Fri Oct 11 10:14:43 CEST 2013 Client_0: Reading received resources (504 ms): { b, c, h }. Lock: 14
Fri Oct 11 10:14:43 CEST 2013 Client_1: Reading received resources (210 ms): { a, b, c, d, f, g, h }. Lock: 15
Fri Oct 11 10:14:43 CEST 2013 Client_4: Reading requested resources: { a, b, c, d, e, f, g, h }
Fri Oct 11 10:14:43 CEST 2013 Client_4: Reading received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 16
Fri Oct 11 10:14:43 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }
Fri Oct 11 10:14:43 CEST 2013 Client_2: Reading received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 17
Fri Oct 11 10:14:43 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }
Fri Oct 11 10:14:43 CEST 2013 Client_1: Reading received resources (0 ms): { a, b, c, d, f, g, h }. Lock: 18
Fri Oct 11 10:14:44 CEST 2013 Client_3: Writing requested resources: { h }
Fri Oct 11 10:14:44 CEST 2013 Client_3: Writing received resources (152 ms): { h }. Lock: 19
Fri Oct 11 10:14:44 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }
Fri Oct 11 10:14:44 CEST 2013 Client_0: Reading requested resources: { b, c, h }
Fri Oct 11 10:14:44 CEST 2013 Client_4: Reading requested resources: { a, b, c, d, e, f, g, h }
Fri Oct 11 10:14:44 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }
Fri Oct 11 10:14:45 CEST 2013 Client_0: Reading received resources (504 ms): { b, c, h }. Lock: 21
Fri Oct 11 10:14:45 CEST 2013 Client_4: Reading received resources (399 ms): { a, b, c, d, e, f, g, h }. Lock: 22
Fri Oct 11 10:14:45 CEST 2013 Client_1: Reading received resources (230 ms): { a, b, c, d, f, g, h }. Lock: 23
Fri Oct 11 10:14:45 CEST 2013 Client_2: Reading received resources (544 ms): { a, b, d, e, f, g, h }. Lock: 20
Fri Oct 11 10:14:45 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }
Fri Oct 11 10:14:45 CEST 2013 Client_1: Reading received resources (0 ms): { a, b, c, d, f, g, h }. Lock: 24
Fri Oct 11 10:14:45 CEST 2013 Client_3: Writing requested resources: { h }
Fri Oct 11 10:14:45 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }
Fri Oct 11 10:14:45 CEST 2013 Client_0: Reading requested resources: { b, c, h }
Fri Oct 11 10:14:46 CEST 2013 Client_4: Reading requested resources: { a, b, c, d, e, f, g, h }
But here the program freezes. I guess it is because of a deadlock.
My question is: how can I solve this? I would really appreciate it if someone could show me some working code examples.
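About the freeze: the ReentrantLock version above acquires the per-resource locks one by one, so two clients can each hold part of their sets while waiting for a resource the other already holds (hold-and-wait), which is the classic deadlock. One standard fix, sketched below as an illustration rather than a drop-in answer, is to always take the locks in one global order (sorted resource names): any wait chain then only moves forward in that order, so a cycle cannot form. The class name is made up, and the resource list is inlined so the sketch is self-contained (it mirrors SharedValues.RESOURCE_LIST):

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantReadWriteLock;

//Sketch: the same ReentrantReadWriteLock idea, but every thread acquires the
//per-resource locks in one global (sorted) order, which prevents deadlock.
class OrderedResHandler
{
    //mirrors SharedValues.RESOURCE_LIST so the sketch compiles on its own
    private static final String[] RESOURCE_LIST = { "a", "b", "c", "d", "e", "f", "g", "h" };
    //AtomicLong instead of a bare ++lockNum, which was also a data race
    private static final AtomicLong lockNum = new AtomicLong();
    private static final Map<Long, Set<String>> usedResources = new ConcurrentHashMap<Long, Set<String>>();
    private static final Map<String, ReentrantReadWriteLock> resources = new ConcurrentHashMap<String, ReentrantReadWriteLock>();

    static
    {
        for (String res : RESOURCE_LIST)
        {
            resources.put(res, new ReentrantReadWriteLock(true)); //fair mode
        }
    }

    public static long getLock(Set<String> xNeededRes, boolean xWrite)
    {
        //TreeSet iterates in sorted order: the single global acquisition order
        for (String name : new TreeSet<String>(xNeededRes))
        {
            if (xWrite) resources.get(name).writeLock().lock();
            else resources.get(name).readLock().lock();
        }
        long id = lockNum.incrementAndGet();
        usedResources.put(id, xNeededRes);
        return id;
    }

    public static void releaseLock(long xLockID)
    {
        Set<String> toRelease = usedResources.remove(xLockID);
        if (toRelease == null) return;
        for (String name : toRelease)
        {
            ReentrantReadWriteLock lock = resources.get(name);
            if (lock.isWriteLockedByCurrentThread()) lock.writeLock().unlock();
            else lock.readLock().unlock();
        }
    }
}
```

Note that releaseLock() must be called by the same thread that called getLock(), since ReentrantReadWriteLock read/write holds are owned per thread; that matches how the Client class uses the handler. The fair locks also give each resource a FIFO queue, which is what keeps writers from starving behind a stream of readers.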
Hmm... it's all a bit Greek to me :) You know, I tried to write the code, but I couldn't. Multithreading, locking, synchronization and the like are all new to me. The thing is, sometimes I can only learn once I see written code, and then in the future I'll know how to solve similar problems :) Still, you gave me some usable advice and I'll try to write the code myself. Thanks anyway! :) – user2821898
After a lot of trial and error I arrived at an implementation that matches the structure outlined in your answer (it really is easier if you keep it simple - it took me a while to realize that). I'd like to compare my implementation with others; do you know where I could find them? The implementation is a generic solution to the "dining philosophers problem" that prevents deadlock and starvation for any number of philosophers using any (number of) resources/forks. – vanOekel
Not really, no :( I didn't use such a mechanism, and I can't recall seeing any actual code like that. Sorry. –