This is a homework assignment. I have two files that were already given: the Client class and the SharedValues interface. This is the description:
I have to write a class (the resource handler) that contains static interfaces and manages the allocation of "n" resources and their scheduling among "k" clients. The clients have only two operations, reading and writing, and there must be no deadlock and no starvation. If a resource is allocated for writing, no other client may use it for any other purpose. If a resource is allocated for reading, only readers can obtain it, not writers. Resources are referred to by their names (String), and an allocated resource is freed again with a key. The resource handler class must contain two interfaces for the clients: getLock() and releaseLock(). The required parameters of getLock() are an object (Set<String>) in which the names of the requested resources are placed, and the identifier of the required operation (boolean: true - writing, false - reading); its return value is an identifier (long). The getLock() call should block the calling client as long as the resource handler cannot grant it the requested resources, and unblock it again when the requested resources become available for the given operation. The return value of releaseLock() is void, and its required parameter is the identifier obtained from the getLock() call. The clients request the locking of a subset of the resources from the resource handler class (through the getLock() interface) and release the resources with the received identifier (through the releaseLock() interface).
I am not a Java professional and I only have a little experience with multithreading; please take that into account.
The following class and interface are given:
SharedValues interface
public interface SharedValues
{
//resources
public final static String[] RESOURCE_LIST = new String[]{ "a", "b", "c", "d", "e", "f", "g", "h" };
//constant of the client's writing method type
public final static boolean WRITE_METHOD = true;
//constant of the client's reading method type
public final static boolean READ_METHOD = false;
//constant for the number of clients
public final static int CLIENTNUM = 5;
//minimum wait time of the client
public final static int CLIENT_HOLD_MINIMUM = 1000;
//maximum wait time difference of the client
public final static int CLIENT_HOLD_DIFF = 1000;
//time limit of the clients
public final static int RUNTIME = 20000;
}
Client class
import java.util.Arrays;
import java.util.Collections;
import java.util.Date;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;
import java.util.ArrayList;
//start and implementation of the client
public class Client extends Thread implements SharedValues
{
//random generator used for creating the clients
private static Random mRandom = new Random();
//flag for stopping
private boolean mRunning = true;
//method used by the client
private boolean mMethod = true;
//the client wants to lock these resources
private Set<String> mNeededRes = new HashSet<String>();
//received identifier for releasing the client's resources
private long mLockID = -1;
//client's logging name
private String mLogName = null;
//client's constructor
public Client( String[] xResList, boolean xMethod, int xClientID )
{
super( "Client_" + xClientID );
mLogName = "Client_" + xClientID;
mMethod = xMethod;
for ( int i = 0; i < xResList.length; i++ )
{
mNeededRes.add( xResList[ i ] );
}
}
//interface for logging
private void log( String xMessage )
{
System.out.println( new Date() + " " + mLogName + ": " + xMessage );
}
//holding resources or sleeping
private synchronized void holdResources()
{
if ( !mRunning )
{
return;
}
//sleep for a random time within the interval
try
{
wait( mRandom.nextInt( CLIENT_HOLD_DIFF ) + CLIENT_HOLD_MINIMUM );
}
catch ( InterruptedException e )
{
log( "Error: Resource allocating interrupted" );
}
}
//for stopping interface
public synchronized void stopRunning() throws Exception
{
//change the flag and wake the client if it is sleeping in holdResources()
if ( mRunning )
{
mRunning = false;
notify();
}
else
{
log( "Error: the client has already stopped!" );
}
}
//Overriding the Thread run() method
public void run()
{
log( "Started." );
while ( mRunning )
{
log( ( ( mMethod == WRITE_METHOD ) ? "Writing" : "Reading" ) + " requested resources: "
+ toSortedSet( mNeededRes ) );
final long startTime = System.currentTimeMillis();
mLockID = ResHandler.getLock( mNeededRes, mMethod );
final long elapsed = System.currentTimeMillis() - startTime;
log( ( ( mMethod == WRITE_METHOD ) ? "Writing" : "Reading" ) + " received resources (" + elapsed
+ " ms): " + toSortedSet( mNeededRes ) + ". Lock: " + mLockID );
holdResources();
ResHandler.releaseLock( mLockID );
holdResources();
}
log( "Stopped." );
}
//creating clients
private static Client createClient( int xClientID )
{
final int resNum = mRandom.nextInt( RESOURCE_LIST.length ) + 1;
//start from all resources and randomly remove entries until resNum of them remain
final ArrayList<String> selectedRes = new ArrayList<String>( Arrays.asList( RESOURCE_LIST ) );
for ( int i = 0; i < ( RESOURCE_LIST.length - resNum ); i++ )
{
final int chosenRes = mRandom.nextInt( selectedRes.size() );
selectedRes.remove( chosenRes );
}
final boolean method = mRandom.nextInt( 5 ) <= 2;
return new Client( ( String[] ) selectedRes.toArray( new String[]{} ), method, xClientID );
}
//auxiliary method that sorts the elements of a set so the logging output is ordered
private String toSortedSet( Set<String> xSet )
{
final StringBuffer tmpSB = new StringBuffer( "{ " );
final String[] sortedRes = ( String[] ) xSet.toArray( new String[]{} );
Arrays.sort( sortedRes );
for ( int i = 0; i < sortedRes.length; i++ )
{
tmpSB.append( sortedRes[ i ] ).append( ", " );
}
tmpSB.setLength( tmpSB.length() - 2 );
tmpSB.append( " }" );
return tmpSB.toString();
}
public static void main( String[] args ) throws Exception
{
//keep references to the clients so they can be stopped later
final Client[] clientArr = new Client[ CLIENTNUM ];
for ( int i = 0; i < clientArr.length; i++ )
{
clientArr[ i ] = createClient( i );
clientArr[ i ].start();
//the clients do not start at the same time
try
{
Thread.sleep( mRandom.nextInt( CLIENT_HOLD_MINIMUM ) );
}
catch ( InterruptedException e )
{
e.printStackTrace();
}
}
//sleeping the running time of clients
try
{
Thread.sleep( RUNTIME );
}
catch ( InterruptedException e )
{
e.printStackTrace();
}
//stopping the clients
for ( int i = 0; i < clientArr.length; i++ )
{
clientArr[ i ].stopRunning();
try
{
clientArr[ i ].join();
}
catch ( InterruptedException e )
{
e.printStackTrace();
}
}
}
}
Resource handler class
import java.util.Set;
class ResHandler {
private static long identifier;
public static long getLock(Set<String> mNeededRes, boolean mMethod) {
return identifier;
}
public static void releaseLock(long mLockID) {
}
}
This is what I have written so far. I can see in the client log that the lock ID is always 0 and the elapsed time is 0 ms, but I don't know why.
This is the output:
Wed Oct 09 04:42:25 CEST 2013 Client_0: Started.
Wed Oct 09 04:42:25 CEST 2013 Client_0: Writing requested resources: { b, c, d, g, h }
Wed Oct 09 04:42:25 CEST 2013 Client_0: Writing received resources (4 ms): { b, c, d, g, h }. Lock: 0
Wed Oct 09 04:42:26 CEST 2013 Client_1: Started.
Wed Oct 09 04:42:26 CEST 2013 Client_1: Writing requested resources: { a, b, c, d, e, f, g, h }
Wed Oct 09 04:42:26 CEST 2013 Client_1: Writing received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:26 CEST 2013 Client_0: Writing requested resources: { b, c, d, g, h }
Wed Oct 09 04:42:26 CEST 2013 Client_0: Writing received resources (0 ms): { b, c, d, g, h }. Lock: 0
Wed Oct 09 04:42:26 CEST 2013 Client_1: Writing requested resources: { a, b, c, d, e, f, g, h }
Wed Oct 09 04:42:26 CEST 2013 Client_1: Writing received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:26 CEST 2013 Client_2: Started.
Wed Oct 09 04:42:26 CEST 2013 Client_2: Writing requested resources: { a, b, d, e, f, g, h }
Wed Oct 09 04:42:26 CEST 2013 Client_2: Writing received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:27 CEST 2013 Client_0: Writing requested resources: { b, c, d, g, h }
Wed Oct 09 04:42:27 CEST 2013 Client_0: Writing received resources (0 ms): { b, c, d, g, h }. Lock: 0
Wed Oct 09 04:42:27 CEST 2013 Client_3: Started.
Wed Oct 09 04:42:27 CEST 2013 Client_3: Reading requested resources: { a, b, c, d, e, f, g, h }
Wed Oct 09 04:42:27 CEST 2013 Client_3: Reading received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:27 CEST 2013 Client_4: Started.
Wed Oct 09 04:42:27 CEST 2013 Client_4: Reading requested resources: { f, h }
Wed Oct 09 04:42:27 CEST 2013 Client_4: Reading received resources (0 ms): { f, h }. Lock: 0
Wed Oct 09 04:42:27 CEST 2013 Client_1: Writing requested resources: { a, b, c, d, e, f, g, h }
Wed Oct 09 04:42:27 CEST 2013 Client_1: Writing received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:28 CEST 2013 Client_0: Writing requested resources: { b, c, d, g, h }
Wed Oct 09 04:42:28 CEST 2013 Client_0: Writing received resources (0 ms): { b, c, d, g, h }. Lock: 0
Wed Oct 09 04:42:28 CEST 2013 Client_4: Reading requested resources: { f, h }
Wed Oct 09 04:42:28 CEST 2013 Client_4: Reading received resources (0 ms): { f, h }. Lock: 0
Wed Oct 09 04:42:28 CEST 2013 Client_3: Reading requested resources: { a, b, c, d, e, f, g, h }
Wed Oct 09 04:42:28 CEST 2013 Client_3: Reading received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:28 CEST 2013 Client_2: Writing requested resources: { a, b, d, e, f, g, h }
Wed Oct 09 04:42:28 CEST 2013 Client_2: Writing received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:28 CEST 2013 Client_3: Reading requested resources: { a, b, c, d, e, f, g, h }
Wed Oct 09 04:42:28 CEST 2013 Client_3: Reading received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:29 CEST 2013 Client_1: Writing requested resources: { a, b, c, d, e, f, g, h }
Wed Oct 09 04:42:29 CEST 2013 Client_1: Writing received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:29 CEST 2013 Client_2: Writing requested resources: { a, b, d, e, f, g, h }
Wed Oct 09 04:42:29 CEST 2013 Client_2: Writing received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 0
Wed Oct 09 04:42:29 CEST 2013 Client_4: Reading requested resources: { f, h }
Wed Oct 09 04:42:29 CEST 2013 Client_4: Reading received resources (0 ms): { f, h }. Lock: 0
Wed Oct 09 04:42:29 CEST 2013 Client_0: Writing requested resources: { b, c, d, g, h }
Wed Oct 09 04:42:29 CEST 2013 Client_0: Writing received resources (0 ms): { b, c, d, g, h }. Lock: 0
I found half of a solution on the internet (Resource manager with ReentrantLocks):
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;
public class ResHandler {
//ID-s of the granted resource lists
private static long lockNum = 0;
//Resources are identified by strings, each client has a list of demanded resources
//we store these when granted, along with an ID
private static ConcurrentHashMap<Long, Set<String>> usedResources
= new ConcurrentHashMap<Long, Set<String>>();
//We store a lock for each resource
private static ConcurrentHashMap<String, ReentrantReadWriteLock> resources
= new ConcurrentHashMap<String, ReentrantReadWriteLock>();
//Filling our resources map with the resources and their locks
static {
for (int i = 0; i < SharedValues.RESOURCE_LIST.length; ++i) {
String res = SharedValues.RESOURCE_LIST[i];
//Fair reentrant lock
ReentrantReadWriteLock lc = new ReentrantReadWriteLock(true);
resources.put(res, lc);
}
}
//We get a set of the required resources and the type of lock we have to use
public static long getLock(Set<String> mNeededRes, boolean mMethod) {
//!!!
if (mMethod == SharedValues.READ_METHOD) {
//We try to get the required resources
for (String mn : mNeededRes)
resources.get(mn).readLock().lock();
//After they are granted, we put them in the usedResources map
++lockNum;
usedResources.put(lockNum, mNeededRes);
return lockNum;
}
//Same thing, but with write locks
else {
for (String mn : mNeededRes)
resources.get(mn).writeLock().lock();
++lockNum;
usedResources.put(lockNum, mNeededRes);
return lockNum;
}
}
//Releasing a set of locks by the set's ID
public static void releaseLock(long mLockID) {
if (!usedResources.containsKey(mLockID)) {
System.out.println("returned, no such key as: " + mLockID);
return;
}
Set<String> toBeReleased = usedResources.get(mLockID);
//Unlocking every lock from this set
for (String s : toBeReleased) {
if (resources.get(s).isWriteLockedByCurrentThread())
resources.get(s).writeLock().unlock();
else
resources.get(s).readLock().unlock();
}
//Deleting from the map
usedResources.remove(mLockID);
}
}
I tried it, and the output changed to the following:
Fri Oct 11 10:14:40 CEST 2013 Client_0: Started.
Fri Oct 11 10:14:40 CEST 2013 Client_0: Reading requested resources: { b, c, h }
Fri Oct 11 10:14:40 CEST 2013 Client_0: Reading received resources (8 ms): { b, c, h }. Lock: 1
Fri Oct 11 10:14:40 CEST 2013 Client_1: Started.
Fri Oct 11 10:14:40 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }
Fri Oct 11 10:14:40 CEST 2013 Client_1: Reading received resources (1 ms): { a, b, c, d, f, g, h }. Lock: 2
Fri Oct 11 10:14:40 CEST 2013 Client_2: Started.
Fri Oct 11 10:14:40 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }
Fri Oct 11 10:14:40 CEST 2013 Client_2: Reading received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 3
Fri Oct 11 10:14:40 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }
Fri Oct 11 10:14:40 CEST 2013 Client_2: Reading received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 4
Fri Oct 11 10:14:41 CEST 2013 Client_3: Started.
Fri Oct 11 10:14:41 CEST 2013 Client_3: Writing requested resources: { h }
Fri Oct 11 10:14:41 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }
Fri Oct 11 10:14:41 CEST 2013 Client_0: Reading requested resources: { b, c, h }
Fri Oct 11 10:14:41 CEST 2013 Client_3: Writing received resources (303 ms): { h }. Lock: 5
Fri Oct 11 10:14:41 CEST 2013 Client_1: Reading received resources (293 ms): { a, b, c, d, f, g, h }. Lock: 6
Fri Oct 11 10:14:41 CEST 2013 Client_0: Reading received resources (171 ms): { b, c, h }. Lock: 7
Fri Oct 11 10:14:41 CEST 2013 Client_3: Writing requested resources: { h }
Fri Oct 11 10:14:41 CEST 2013 Client_4: Started.
Fri Oct 11 10:14:41 CEST 2013 Client_4: Reading requested resources: { a, b, c, d, e, f, g, h }
Fri Oct 11 10:14:42 CEST 2013 Client_3: Writing received resources (633 ms): { h }. Lock: 8
Fri Oct 11 10:14:42 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }
Fri Oct 11 10:14:42 CEST 2013 Client_4: Reading received resources (819 ms): { a, b, c, d, e, f, g, h }. Lock: 9
Fri Oct 11 10:14:42 CEST 2013 Client_2: Reading received resources (163 ms): { a, b, d, e, f, g, h }. Lock: 10
Fri Oct 11 10:14:42 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }
Fri Oct 11 10:14:42 CEST 2013 Client_1: Reading received resources (0 ms): { a, b, c, d, f, g, h }. Lock: 11
Fri Oct 11 10:14:42 CEST 2013 Client_0: Reading requested resources: { b, c, h }
Fri Oct 11 10:14:42 CEST 2013 Client_0: Reading received resources (0 ms): { b, c, h }. Lock: 12
Fri Oct 11 10:14:42 CEST 2013 Client_3: Writing requested resources: { h }
Fri Oct 11 10:14:42 CEST 2013 Client_0: Reading requested resources: { b, c, h }
Fri Oct 11 10:14:43 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }
Fri Oct 11 10:14:43 CEST 2013 Client_3: Writing received resources (447 ms): { h }. Lock: 13
Fri Oct 11 10:14:43 CEST 2013 Client_0: Reading received resources (504 ms): { b, c, h }. Lock: 14
Fri Oct 11 10:14:43 CEST 2013 Client_1: Reading received resources (210 ms): { a, b, c, d, f, g, h }. Lock: 15
Fri Oct 11 10:14:43 CEST 2013 Client_4: Reading requested resources: { a, b, c, d, e, f, g, h }
Fri Oct 11 10:14:43 CEST 2013 Client_4: Reading received resources (0 ms): { a, b, c, d, e, f, g, h }. Lock: 16
Fri Oct 11 10:14:43 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }
Fri Oct 11 10:14:43 CEST 2013 Client_2: Reading received resources (0 ms): { a, b, d, e, f, g, h }. Lock: 17
Fri Oct 11 10:14:43 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }
Fri Oct 11 10:14:43 CEST 2013 Client_1: Reading received resources (0 ms): { a, b, c, d, f, g, h }. Lock: 18
Fri Oct 11 10:14:44 CEST 2013 Client_3: Writing requested resources: { h }
Fri Oct 11 10:14:44 CEST 2013 Client_3: Writing received resources (152 ms): { h }. Lock: 19
Fri Oct 11 10:14:44 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }
Fri Oct 11 10:14:44 CEST 2013 Client_0: Reading requested resources: { b, c, h }
Fri Oct 11 10:14:44 CEST 2013 Client_4: Reading requested resources: { a, b, c, d, e, f, g, h }
Fri Oct 11 10:14:44 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }
Fri Oct 11 10:14:45 CEST 2013 Client_0: Reading received resources (504 ms): { b, c, h }. Lock: 21
Fri Oct 11 10:14:45 CEST 2013 Client_4: Reading received resources (399 ms): { a, b, c, d, e, f, g, h }. Lock: 22
Fri Oct 11 10:14:45 CEST 2013 Client_1: Reading received resources (230 ms): { a, b, c, d, f, g, h }. Lock: 23
Fri Oct 11 10:14:45 CEST 2013 Client_2: Reading received resources (544 ms): { a, b, d, e, f, g, h }. Lock: 20
Fri Oct 11 10:14:45 CEST 2013 Client_1: Reading requested resources: { a, b, c, d, f, g, h }
Fri Oct 11 10:14:45 CEST 2013 Client_1: Reading received resources (0 ms): { a, b, c, d, f, g, h }. Lock: 24
Fri Oct 11 10:14:45 CEST 2013 Client_3: Writing requested resources: { h }
Fri Oct 11 10:14:45 CEST 2013 Client_2: Reading requested resources: { a, b, d, e, f, g, h }
Fri Oct 11 10:14:45 CEST 2013 Client_0: Reading requested resources: { b, c, h }
Fri Oct 11 10:14:46 CEST 2013 Client_4: Reading requested resources: { a, b, c, d, e, f, g, h }
But at this point the program froze. I guess it is because of a deadlock.
My question is: how can I fix this? If someone could show me some useful code examples, I would really appreciate it.
Answer (score: 2)
Trying to obtain the requested locks by looping over the resources and locking them one at a time will inevitably end in deadlock. No client should be granted any lock at all unless the entire set it requires is available.
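To make the failure mode concrete, here is a small standalone demo (the class name, the hard-coded two resources and the sleeps are mine, purely to force the bad interleaving; in your posted getLock() the locking order simply depends on HashSet iteration, which you do not control). Two writers take the same two fair write locks in opposite order, one lock at a time, and the program normally hangs:
import java.util.concurrent.locks.ReentrantReadWriteLock;
public class DeadlockDemo
{
    public static void main( String[] args ) throws InterruptedException
    {
        final ReentrantReadWriteLock resA = new ReentrantReadWriteLock( true );
        final ReentrantReadWriteLock resB = new ReentrantReadWriteLock( true );
        Thread client1 = new Thread( new Runnable()
        {
            public void run()
            {
                resA.writeLock().lock();   //got "a"
                sleepQuietly( 100 );       //give the other client time to grab "b"
                resB.writeLock().lock();   //blocks forever: "b" is held by client2
            }
        } );
        Thread client2 = new Thread( new Runnable()
        {
            public void run()
            {
                resB.writeLock().lock();   //got "b"
                sleepQuietly( 100 );
                resA.writeLock().lock();   //blocks forever: "a" is held by client1
            }
        } );
        client1.start();
        client2.start();
        client1.join();                    //never returns -> deadlock
    }
    private static void sleepQuietly( long xMillis )
    {
        try { Thread.sleep( xMillis ); } catch ( InterruptedException e ) { }
    }
}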
One solution:
Use only ONE lock, allowing one client at a time into a "critical section" that protects the free/allocated sets, the algorithm that checks whether all the "locks" a client needs are available, and the algorithm for releasing "locks". If the requirements of a client entering this critical section cannot be satisfied in full immediately, it creates an event/semaphore to wait on, stores its requirements and the event/semaphore in a container (generating the ID here, so the data can be looked up again at release time), leaves the critical section and waits on its event/semaphore, so it is blocked without having been given any locks. When a client enters the critical section to release its locks, it uses its ID to look up its data in the container, marks its allocated resources as free, removes itself from the container, and then iterates over the container looking for any blocked client that can now acquire all of its requested locks. If one is found, its locks are marked as allocated, the critical section is left and that client's event/semaphore is signalled, allowing it to run with all of its locks allocated. (A rough sketch of this scheme, with a few simplifications, follows after the PS below.)
The trick to using a complex locking scheme is not to use a complex locking scheme :)
You can write the code yourself - it is homework, after all :)
PS - starvation. You can implement anti-starvation in any way you like. One way is to "rotate" the container entries when resources are released, before iterating over the container looking for runnable clients. That way, every client eventually gets the chance to have its resource requirements looked up first.
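Since you asked for code examples, here is a rough, simplified sketch of the single-lock idea. It uses one Java monitor with wait()/notifyAll() instead of a per-client event/semaphore, generates the ID inside the critical section, and leaves out the anti-starvation rotation from the PS. The class name and the getLock()/releaseLock() signatures follow your assignment; everything inside is only an illustration under those assumptions, not a reference solution:
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
//sketch only: all-or-nothing allocation guarded by a single monitor
public class ResHandler implements SharedValues
{
    //the one monitor guarding the bookkeeping; clients wait on it while they cannot be served
    private static final Object MONITOR = new Object();
    //per-resource reader counts (missing key = no readers)
    private static final Map<String, Integer> readCount = new HashMap<String, Integer>();
    //resources currently allocated to a writer
    private static final Set<String> writeHeld = new HashSet<String>();
    //granted lock IDs -> the resources they cover
    private static final Map<Long, Set<String>> granted = new HashMap<Long, Set<String>>();
    //granted lock IDs -> the mode they were granted in (true = write)
    private static final Map<Long, Boolean> grantedMode = new HashMap<Long, Boolean>();
    //ID generator, only touched while holding the monitor
    private static long nextID = 0;
    public static long getLock( Set<String> xNeededRes, boolean xMethod )
    {
        synchronized ( MONITOR )
        {
            //all-or-nothing: block until the whole set is free for this operation
            while ( !allAvailable( xNeededRes, xMethod ) )
            {
                try { MONITOR.wait(); } catch ( InterruptedException e ) { /* ignored in the sketch */ }
            }
            //mark the whole set as allocated in one step
            for ( String res : xNeededRes )
            {
                if ( xMethod == WRITE_METHOD )
                {
                    writeHeld.add( res );
                }
                else
                {
                    final Integer count = readCount.get( res );
                    readCount.put( res, ( count == null ) ? 1 : count + 1 );
                }
            }
            final long id = ++nextID;
            granted.put( id, new HashSet<String>( xNeededRes ) );
            grantedMode.put( id, xMethod );
            return id;
        }
    }
    public static void releaseLock( long xLockID )
    {
        synchronized ( MONITOR )
        {
            final Set<String> toRelease = granted.remove( xLockID );
            if ( toRelease == null )
            {
                return; //unknown identifier
            }
            final boolean wasWrite = grantedMode.remove( xLockID );
            for ( String res : toRelease )
            {
                if ( wasWrite )
                {
                    writeHeld.remove( res );
                }
                else
                {
                    readCount.put( res, readCount.get( res ) - 1 );
                }
            }
            //wake every waiting client; each one re-checks its own demand set
            MONITOR.notifyAll();
        }
    }
    //a set is free for reading if no writer holds any element of it,
    //and free for writing if additionally no reader holds any element of it
    private static boolean allAvailable( Set<String> xNeededRes, boolean xMethod )
    {
        for ( String res : xNeededRes )
        {
            if ( writeHeld.contains( res ) )
            {
                return false;
            }
            final Integer count = readCount.get( res );
            if ( xMethod == WRITE_METHOD && count != null && count > 0 )
            {
                return false;
            }
        }
        return true;
    }
}
Note that notifyAll() plus re-checking is the crude part: it wakes everyone on every release and by itself does not guarantee freedom from starvation, which is exactly where the rotation idea from the PS (or per-client signalling) would come in.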