
ASN.1/Java Runtime Advanced Topics

Applies to: ASN.1/Java v8.7

Working with Huge Values

Like other universal classes, classes that represent huge values implement the functionality of AbstractData: their instances can be encoded, decoded, compared for equality, cloned, serialized, and deserialized. Unlike ordinary values, which are expected to fit in memory, the contents of huge values are represented as special objects that denote generic sequences of uniform items (octets, characters, or objects) and implement the Storage interface. Correspondingly, accessor and mutator methods of the universal classes return or accept ByteStorage, CharStorage, or ObjectStorage instead of byte[], String/char[], or AbstractData[].

The Storage Interface

The Storage interface specifies a generic storage with sequential access. It provides methods that let you do the following (a brief usage sketch follows the list):

  • Identify the stored item type (getKind).
  • Get the number of items in the storage (getSize).
  • Identify if the storage is mutable (canWrite).
  • Identify if the storage is readable (canRead).
  • Copy the storage (copy).
  • Empty the storage (reset).
  • Destroy the storage object when it is no longer needed (deallocate).
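
The following minimal sketch (not part of the generated API) interrogates an arbitrary Storage object using only the methods listed above; the exact meaning of the value returned by getKind() (for example, whether it matches the StorageManager.STORAGE_* constants) is an assumption to verify against the runtime javadoc:

public void describe(Storage storage)
{
    System.out.println("kind     = " + storage.getKind());
    System.out.println("size     = " + storage.getSize());
    System.out.println("readable = " + storage.canRead());
    System.out.println("writable = " + storage.canWrite());
}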

Depending on the type of the stored items, specific subinterfaces define how the application accesses or modifies the contents of the storage:

Storage
Is the common superinterface.
ByteStorage
Allows you to access and modify a byte[] array (for BIT STRING and OCTET STRING).
CharStorage
Allows you to access and modify a character array (for character strings).
ObjectStorage
Allows you to access and modify an array of objects (for SET OF and SEQUENCE OF).

The getReader() method provides the caller with an input stream that iteratively returns stored items. To modify the contents, the application writes items to the output stream returned by the getWriter() method. For example, an application can read or process the value of a huge OCTET STRING as follows:

public int process(HugeOctetString toBeSigned) throws IOException
{
    // 1. Get the value as a ByteStorage object
    ByteStorage contents = toBeSigned.byteStorageValue();
    // 2. Get input stream to read content octets
    InputStream reader = contents.getReader();
    // 3. Read content octets and compute the signature
    long length = contents.getSize();
    int signature = 0;
    try {
        for (long i=0; i<length; i++) {
            int octet = reader.read();
            // Update the signature
            signature = update(signature, octet);
        }
    } finally {
        // 4. Complete read operation. Failure to call 'close()' will
        // make it impossible to write the contents until the 'reader'
        // is finalized by the garbage collector.
        reader.close();
    }
    // 5. Return the result to the caller
    return signature; 
}

To modify a huge value (for example, to fill the contents with a given pattern), follow these steps:

public void fill(HugeOctetString sensitive, int pattern) throws IOException
{
    // 1. Get the value as a ByteStorage object and determine
    //    its length
    ByteStorage contents = sensitive.byteStorageValue();
    long length = contents.getSize();
    // 2. Get output stream to fill the value with octets
    OutputStream writer = contents.getWriter(false);
    try {
        // 3. Fill the value with a given pattern
        for (long i=0; i<length; i++)
            writer.write(pattern);
    } finally {
        // 4. Complete write operation. Failure to call 'close()' will
        // make it impossible to read the contents until the 'writer'
        // is finalized by the garbage collector.
        writer.close();
    }
}

NOTE: To zeroize sensitive data before disposal, invoke contents.reset(true) to overwrite the contents with zeros and to reset the item count to 0.
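
For example, a disposal sequence for the sensitive value above might look like the following sketch; the deallocate() and setValue() calls are discussed under "Huge values and garbage collection" below:

// Zeroize the sensitive contents, then release the storage object
ByteStorage contents = sensitive.byteStorageValue();
contents.reset(true);     // overwrite the octets with zeros; item count becomes 0
contents.deallocate();    // release external resources held by the storage
sensitive.setValue(null); // clear the reference to the now-unusable storage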

Storage allocation

To instantiate a huge value you must create a Storage object of the appropriate type. To instantiate the ByteStorage, CharStorage, and ObjectStorage interfaces, you must either know the details of the implementing class (which public constructors it defines) or rely on a factory for Storage objects. This type of factory is called the StorageManager.

There are two ways to instantiate a huge value:

  • Using the public constructors of the corresponding implementation of the Storage:
    HugeOctetString toBeSigned = new HugeOctetString(
        new OSSByteStorage());
    or
    File data = new File("tobesigned.dat");
    HugeOctetString toBeSigned = new HugeOctetString(
        new OSSByteStorage(data));
  • Using the allocate factory method of the class that implements the StorageManager interface:
    StorageManager storageManager = OSSStorageManager.getInstance();
    HugeOctetString toBeSigned = new HugeOctetString(
        storageManager.allocate(StorageManager.STORAGE_BYTES));

The runtime cannot know in advance which public constructors are available in the specific implementation of ByteStorage, CharStorage, or ObjectStorage, so it always uses the factory method when it needs to instantiate a huge value for itself. To invoke the factory method, an instance of the StorageManager is required. To provide the instance, use the setStorageManager method of the Coder class. Normally, the application calls this method to plug in custom implementations of Storage and StorageManager. By default, if the instance of StorageManager has not been passed to the runtime, the runtime performs its allocations using the instance of the OSSStorageManager defined in the com.oss.storage package.
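
For example, plugging in a custom manager might look like the following sketch. MyStorageManager is a hypothetical class that implements the StorageManager interface; the sketch also assumes that setStorageManager is invoked on a Coder instance, so check the runtime javadoc for the exact form:

// Make the runtime allocate storage for huge values through a custom
// StorageManager instead of the default OSSStorageManager.
Coder coder = MyProject.getBERCoder();
coder.setStorageManager(new MyStorageManager()); // hypothetical implementation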

Huge values and garbage collection

The instances that represent huge values have the usual life cycle of a Java object: an instance is created, used by the application, and finally reclaimed by the garbage collector. However, a Storage implementation typically uses external resources; for example, if the implementation uses disk storage, every Storage object has a disk file associated with it. Tracking the use of these external resources is the responsibility of the application code. The JVM can detect a shortage of heap space and run the garbage collector to reclaim the space occupied by unused objects, but it cannot take similar action when native resources, such as socket handles, file handles, or directory entries, run short. You should not rely on finalization; instead, free external resources explicitly. The runtime offers the following tools to free external resources consumed by huge values:

  • The deallocate() method of Storage.
  • The delete() method of AbstractData.

The use of deallocate() is straightforward:

public void receive()
{
    Coder coder = MyProject.getBERCoder();
    try {
        HugeOctetString data = 
          (HugeOctetString)coder.decode(source, new HugeOctetString());
        // Process the data
        ...
        // The decoded 'data' is processed and is no longer needed.
        ByteStorage contents = data.byteStorageValue();
        if (contents != null) { 
            contents.deallocate();
            // Clear the reference to ByteStorage since deallocate()
            // makes the ByteStorage object unusable.
            data.setValue(null);
        }
    } catch (Exception e) {
        ...
    }
}

Alternatively, you can use the delete() method, especially when a huge value is deeply nested within the outermost PDU:

Nested DEFINITIONS ::= BEGIN
    Signed ::= SEQUENCE {
        signature BIT STRING,
        algorithm OBJECT IDENTIFIER,
        content CHOICE {
            binary SEQUENCE {
                fileName BMPString,
                octets OCTET STRING --<OBJHANDLE>-- 
            },
            text SEQUENCE {
                 encoding IA5String,
                 characters IA5String --<OBJHANDLE>--
            }
        }
    }
END

...
    
public void receive()
{
    Coder coder = MyProject.getBERCoder();
    try {
        Signed data = 
            (Signed)coder.decode(source, new Signed());
        // Process the data
        ...
        // The decoded 'data' is processed and is no longer needed.
        data.delete();
    } catch (Exception e) {
        ...
    }
}

The call to data.delete() traverses the Signed data tree and invokes deallocate() for every huge value it encounters.

OSS implementation of Storage and StorageManager

When an application does not set the storage manager, the runtime uses the default storage manager to allocate storage for huge values. The default storage manager is implemented by the com.oss.storage package. The classes and hierarchy are outlined below:

OSSStorageManager
OSSFileStorage
    OSSByteStorage
    OSSCharStorage
    OSSObjectStorage

The implementation uses disk files to store the contents of huge values. Each storage class defines two constructors: a default constructor and a constructor with the File argument. The latter form is useful when you need to associate a storage object with existing data, for example, when you have an executable file and need to sign it.

Example

ASN.1

SampleSigning DEFINITIONS ::= BEGIN
    SignedExecutable ::= SEQUENCE {
        hash INTEGER --<HUGE>--,
        name UTF8String,
        code OCTET STRING --<OBJHANDLE>--
    }
END

Java

// Sign and DER encode an existing executable
HugeOctetString unsignedCode = new HugeOctetString(
    new OSSByteStorage(new File("myprog.exe")));
BigInteger hash = computeHash(unsignedCode);
SignedExecutable signed = new SignedExecutable(
    new HugeInteger(hash),
    new UTF8String16("myprog.exe"),
    unsignedCode);
// Get DER coder and create DER encoding of signed
// executable
Coder coder = MyProject.getDERCoder();
OutputStream sink = new FileOutputStream("myprog.der");
coder.encode(signed, sink);
signed.delete();

NOTE: The deallocate() method in this implementation never attempts to delete the disk file of the storage object.

The default constructor is used to create scratch storage objects that will be filled by the application itself:

// Encrypt data file and place the encrypted data into a 
// HugeOctetString
FileInputStream file = new FileInputStream("unencrypted.dat");
// Create ByteStorage to receive the encrypted data
ByteStorage encrypted = new OSSByteStorage();
OutputStream writer = encrypted.getWriter(false);
byte[] block = new byte[2048];
try {
    while (file.available() > 0) {
        // Read blocks of data from the input file, encrypt
        // every block and put it into ByteStorage
        int count = file.read(block);
        encrypt(block, count);
        writer.write(block, 0, count);
    }
} finally {
    writer.close();
    file.close();
}
HugeOctetString encryptedData = new HugeOctetString(encrypted);
// Get DER coder and encode encrypted data
Coder coder = MyProject.getDERCoder();
OutputStream sink = new FileOutputStream("encrypted.der");
coder.encode(encryptedData, sink);
encryptedData.delete();

Alternatively, you can get a scratch storage object by using the allocate() factory method of StorageManager:

// Encrypt data file and place the encrypted data into a HugeOctetString
FileInputStream file = new FileInputStream("unencrypted.dat");
StorageManager storageManager = OSSStorageManager.getInstance();
// Create a ByteStorage to receive the encrypted data
ByteStorage encrypted = 
    storageManager.allocate(StorageManager.STORAGE_BYTES);
...

Since the scratch storage object needs a disk file to store the data, the implementation automatically creates a temporary file for that purpose. By default, this file is given a name such as oss12345.tmp and is created in the system-dependent default temporary-file directory specified by the java.io.tmpdir system property. The OSSStorageManager defines a number of methods that allow customization of these defaults (setWorkingDirectory, setPrefix, setSuffix).

To delete the temporary file, you can:

  • Invoke the deallocate() method on the Storage.
  • Invoke the delete() method of AbstractData, which internally invokes deallocate().
  • Wait until the storage object is reclaimed by the garbage collector (the implementation of finalize() also invokes deallocate()).

If your application fails to deallocate every scratch storage object it has created and relies on finalization instead, some temporary files could remain undeleted after the application terminates. To prevent your application from leaving such files behind, you can configure the storage manager to automatically delete temporary files when the JVM terminates (setDeleteFilesOnExit).
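
For illustration, applying these customizations at application startup might look like the sketch below. Only the method names come from this documentation; the argument types and the use of an OSSStorageManager instance (rather than static calls) are assumptions:

// Customize how the default storage manager creates temporary files.
OSSStorageManager manager = (OSSStorageManager)OSSStorageManager.getInstance();
manager.setWorkingDirectory("/var/tmp/myapp"); // directory for temporary files
manager.setPrefix("myapp");                    // file name prefix instead of "oss"
manager.setSuffix(".huge");                    // file name suffix instead of ".tmp"
manager.setDeleteFilesOnExit(true);            // remove leftover files at JVM exit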

Scratch storage objects created by the default constructor or by the allocate() factory method are readable and writable. A Storage object created for an existing disk file is writable if the disk file has write permission. If the disk file is read-only, the storage object is read-only as well. When the Storage object is both readable and writable, extra care is taken to prevent simultaneous read and write access to the contents of the storage, namely:

  • After the getReader() method has been called for the Storage object, any subsequent invocation of canWrite() returns false and an invocation of getWriter() throws a StorageException. Write access is re-enabled as soon as the close() method is invoked for the object returned by the first call to the getReader() method.
  • After the getWriter() method has been called for the Storage object, both canRead() and canWrite() return false and an invocation of either getWriter() or getReader() throws a StorageException. Both read and write access are re-enabled as soon as the close() method is invoked for the object returned by the first call to the getWriter() method.

Example

OSSCharStorage chars = new OSSCharStorage();
// Both calls will return true
boolean readable = chars.canRead();
boolean writable = chars.canWrite();
Reader reader = chars.getReader();
// Next call will return 'false'. The modification of
// the storage is disabled until the call to 
// reader.close()
writable = chars.canWrite();
// Next line will throw an exception. Modification
// of the storage is not allowed when there is an
// active reader. Reader is deactivated when its
// close() method is called.
Writer anotherWriter = chars.getWriter(false);
or
OSSCharStorage chars = new OSSCharStorage();
// Both calls will return true
boolean readable = chars.canRead();
boolean writable = chars.canWrite();
Writer writer = chars.getWriter(false);
// Next calls will return 'false'. Writing sets
// exclusive access mode until the call to 
// writer.close()
readable = chars.canRead();
writable = chars.canWrite();
// Next lines will throw an exception. 
// No extra readers or writers can be obtained
// for the storage when there is an active writer. 
// Writer is deactivated when its close() method is 
// called.
Writer anotherWriter = chars.getWriter(false);
Reader reader = chars.getReader();

NOTE: The OSS implementation of ObjectStorage does not support the append operation. If you attempt to pass true to its getWriter() method, a StorageException will be thrown.

For your convenience, the source code for the com.oss.storage classes is available in the source.jar file, which is located in the ASN.1/Java Tools installation directory.

The following notes are useful when you use the ValueInFile directive and the Storage classes:

  • The constraint checker ignores any constraints imposed on the data that has ValueInFile applied.
  • DER does not support huge SET OF (an exception is thrown). But a SET that has huge unknown extensions (as well as a SET with components that are huge values) is supported by DER.
  • The ValueInFile feature is not supported by the CER, XER, and E-XER coders.
  • When decoding huge values, the huge value is copied from the encoding thus causing two copies: one in the input stream and the second in ByteStorage.

Customizing the Storage and the StorageManager

If the default implementation of the storage classes does not perfectly fit your application needs, the runtime provides an easy way to plug in customized versions of Storage and StorageManager.

To customize the way the runtime handles huge values, follow these steps:

  1. Develop your custom implementations of ByteStorage, CharStorage, and ObjectStorage. Redefine only the classes whose behavior you wish to change. Your customized implementation can be a class as defined by the UserClass directive.
  2. Develop your own implementation of the StorageManager whose allocate() method will return storage objects you implemented in step 1.
  3. Plug your customized StorageManager into the runtime using the setStorageManager (com.oss.asn1.StorageManager) method of the Coder class.

Steps 2 and 3 are optional; implement them whenever you need the decoder to create your customized storage objects instead of the OSS storage objects.

Customizing with the UserClass directive

The UserClass directive and the ValueInFile directive used in combination are useful when you need to support post-processing of a decoded ValueInFile value.

The decoder checks if the component being decoded implements StorageManager. If it does, the decoder invokes allocate() defined by the component rather than allocate() of the StorageManager that was set via setStorageManager() of Coder.

The user class that you create in your application implements StorageManager and defines allocate() to return the appropriate ByteStorage implementation (that computes the digest while data is written to the OutputStream returned by ByteStorage.getWriter()).

In other words, at runtime, if the class of the value being decoded implements StorageManager, allocate() of that class is used to allocate the contents. Otherwise, the object that was set via the setStorageManager() method is used.

Examples

To avoid creating an extra large file, customize the OSSByteStorage class to encrypt data on the fly: implement an input stream that reads data from a disk file and encrypts it before returning it to the caller. If you are using a block cipher, the implementation resembles BufferedInputStream.

public class CryptoInputStream extends InputStream {
    // Construct the stream providing the input file.
    public CryptoInputStream(File data);
    // Read the next block from the input file and encrypt it using
    // block cipher.
    private void fillBlock();
    // Read a single octet from the stream. If there are no octets
    // in the internal buffer, invoke fillBlock() to read the next block
    // of data.
    public int read();
    // Read octets from the stream, invoking fillBlock() as necessary.
    public int read(byte[] octets, int offset, int length);
}

Implement a ByteStorage whose getReader() method returns an instance of CryptoInputStream. You can do this by subclassing OSSByteStorage. Assuming that the new storage object is used only as input to the encoder, it can be immutable (read-only):

public class CryptoByteStorage extends OSSByteStorage {
    // We do not implement a default constructor because it does
    // not make sense for an immutable object. Instead, we allow
    // creation of CryptoByteStorage for an existing file only.
    public CryptoByteStorage(File dataFile)
    {
        super(dataFile);
    }
    // The storage object is read-only. For this reason 
    // canWrite() returns false and both getWriter() and reset()
    // throw a StorageException.
    public boolean canWrite()
    {
        return false;
    }
    public OutputStream getWriter(boolean append)
    {
        throw new StorageException("This storage is read-only");
    }
    public void reset(boolean zeroize)
    {
        throw new StorageException("This storage is read-only");
    }
    // The getReader() method returns an instance of CryptoInputStream.
    // The CryptoInputStream reads input data and encrypts it before
    // returning to the caller.
    public InputStream getReader()
    {
        return new CryptoInputStream(mFile);
    }
}

Having the CryptoByteStorage class, you can rewrite the original example as follows:

// Encode binary data from a disk file doing data encryption 
// on the fly 
HugeOctetString encryptedData = 
    new HugeOctetString(
        new CryptoByteStorage(
            new File("unecrypted.dat")
        )
    );
// Get DER coder and encode encrypted data
Coder coder = MyProject.getDERCoder();
OutputStream sink = new FileOutputStream("encrypted.der");
coder.encode(encryptedData, sink);

NOTE: This example does not modify the StorageManager, because the customized storage object is used only as input to the encoder.

Customizing with BER indefinite length encoding

In this case, the length of the value to be encoded can be unknown. For this application, a custom ByteStorage or CharStorage class can be implemented that indicates to the encoder that the value's length is unknown by returning -1 from the getSize() method.

The values of unknown length can be encoded using BER segmented indefinite length form encodings. When getSize() returns -1 and the Storage is either a ByteStorage or a CharStorage, the InputStream returned by ByteStorage.getReader() and the Reader returned by CharStorage.getReader() should indicate the segment boundaries as follows:

  • The InputStream.available() tells you how many octets the next segment contains. available() == 0 indicates the value's end.
  • The Reader.ready() method returns false on a segment boundary. As soon as the encoder gets false from Reader.ready(), it ends the current segment and is ready to start a new one. If Reader.ready() returns false at the start of a segment, the value ends.

The following partial code illustrates a class that implements an indefinite length value represented by a ByteStorage class:

public class MyByteStorage implements com.oss.asn1.ByteStorage {

    protected InputStream contentProvider;

    public MyByteStorage(InputStream provider)
    {
        contentProvider = provider;
    }
    ...
    public InputStream getReader()
    {
        return new ByteReader();
    }
    ...

    class ByteReader extends InputStream {

        // Internal buffer
        protected byte[] buffer = new byte[BLOCK_SIZE];
        // Number of octets in buffer
        protected int count = 0;
        // Current position in buffer
        protected int pos = 0;

        protected void fillBuffer()
            throws IOException
        {
            count = contentProvider.read(buffer);
            // read() returns -1 at the end of the stream; normalize it to 0
            // so that available() reports an empty next segment.
            if (count < 0)
                count = 0;
            pos = 0;
        }

        public int available()
            throws IOException
        {
            if (pos < count)
                return count - pos;
            fillBuffer();
            return count - pos;
        }

        public int read()
            throws IOException
        {
            if (pos < count)
                return buffer[pos++] & 0xFF;
            fillBuffer();
            if (count == 0)
                return -1;
            return buffer[pos++] & 0xFF;
        }
        ...
    }

}

The following notes are useful if you choose to develop your own implementation of the storage classes:

  • The clone() method of the universal class containing the storage object uses the copy() method of Storage to obtain a copy of the value.
  • The toString() method of AbstractData uses the toString() method of the Storage object to print the value. For example, the OSS storage classes define toString() to return the name of the disk file associated with the storage object (a sketch of a custom override follows this list).
  • When you change the StorageManager to return your custom storage objects, keep in mind that the runtime can allocate ByteStorage objects for its internal use, for instance, for temporary storage that accumulates huge encodings when the BER definite length form of encoding is used. For this reason, you should avoid heavy data processing in the streams returned by getReader() and getWriter() whenever possible.
  • The runtime guarantees that once it gets an input or output stream by calling the getReader() or getWriter() method of the storage object, it calls the close() method for the returned stream. This feature facilitates the development of read and write locks in custom implementations of the storage classes.
  • To prevent the leak of external resources, the runtime guarantees that it deallocates every temporary storage object created by the runtime for its internal use.
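
For example, the CryptoByteStorage class shown earlier could override toString() along the lines of the following sketch; mFile is assumed to be the protected File member inherited from OSSByteStorage, as in the getReader() override above:

// Let AbstractData.toString() print the name of the underlying disk file.
public String toString()
{
    return "CryptoByteStorage(" + mFile.getName() + ")";
}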

A step-by-step example

The code (both Java and ASN.1) is available in the samples/advanced/vif subdirectory. The following code implements a simple utility for secure exchange of executable files. The sender takes an executable file, signs it, and sends it to the recipient. The recipient verifies the electronic signature and if the signature matches the contents, he saves the enclosed executable code for future use.

First, the message (PDU) is defined for the exchange in terms of ASN.1. An executable file has a name and executable code, so it is defined as a SEQUENCE with two components. Because the executable code can be huge, the OBJHANDLE directive is applied to the code component of the Executable SEQUENCE.

To verify the signature, the recipient needs the signature itself, the name of the signer (to extract his public key from the key store), and the identification of the cryptographic algorithm the signer used to sign the authenticated contents. Another SEQUENCE is defined with fields that provide all this information (the SignedExecutable data type), which will be the top-level message (PDU). Note that the data component of the SignedExecutable has both the ASN1.DeferDecoding and the OBJHANDLE directives applied (the first directive is used for better performance). Otherwise, the sender would have to encode the data component twice: first, before computing the signature for the data, and then, when encoding the top-level message. The receiver would have to re-encode the data to check the validity of the signature (remember, the signature is computed for a DER encoding of the component rather than for the component itself). Since the data component has a huge value inside, the second directive is applied to indicate that the runtime should save the encoding of the deferred component in a storage object rather than in a byte[] array.

The signed.asn file contains the following:

--<ASN1.DeferDecoding SignedExe.SignedExecutable.data>--
SignedExe DEFINITIONS ::= BEGIN
    Executable ::= SEQUENCE {
        -- Name of the executable file
        name UTF8String,
        -- Executable code
        code OCTET STRING --<OBJHANDLE>--
    }
    SignedExecutable ::= SEQUENCE {
        -- Message digest, computed for DER encoding
        -- of the 'data' component
        encryptedDigest OCTET STRING,
        -- Identifies the signer
        signerID UTF8String,
        -- Identifies the digest algorithm
        digestID UTF8String,
        -- Data to be signed
        data Executable --<OBJHANDLE>--
    }
END

The utility is implemented using the stepwise refinement approach. At the very top is the ExeSigner class. Its main() recognizes the following command-line parameters:

  • A parameter that specifies whether the utility should sign an existing executable file or verify the signature for the file received.
  • If we are signing an existing executable file, the utility additionally needs the name of the file to be signed and the information about the signer (signer's name and the password to extract the signer's private key from key store).
  • If we are verifying the signature for the file received, the utility needs the name of the file that contains the DER encoding of the SignedExecutable message.
  • Additionally we will provide an optional verbose parameter that will instruct the utility to print progress messages.

The following syntax for the command-line:

java ExeSigner [-v] send file_to_sign signer's_name password

or

java ExeSigner [-v] receive file_to_verify

leads to the following implementation of the ExeSigner main class:

import com.oss.storage.*;
import com.oss.asn1.*;
import java.io.*;
import java.security.*;
import java.security.cert.*;
import signed.*;
import signed.signedexe.*;

public class ExeSigner {
    // Exit codes
    public final static int SUCCESS = 0;
    public final static int INIT_FAILED = -1;
    public final static int FEW_ARGS = 1;
    public final static int FEW_SEND_ARGS = 2;
    public final static int FEW_RECV_ARGS = 3;
    public final static int BAD_COMMAND = 4;
    public final static int SIGNING_FAILED = 6;
    public final static int EXTRACTION_FAILED = 7;
    // Enables verbose operation
    static boolean verbose = false;
    // Main executes the command, specified
    // in the command line.
    public static void main(String[] args)
    {
        int arg_count = args.length;
        int arg = 0;
        int rc = SUCCESS;
        try {
            Signed.initialize();
        } catch (Exception e) {
            System.out.println("Initialization failed: " + e);
            System.exit(INIT_FAILED);
        }
        if (arg_count < 1)
            rc = FEW_ARGS;
        else {
            rc = BAD_COMMAND;
            while (arg_count > 0) {
                if (args[arg].equals("-v")) {
                    verbose = true;
                    ++arg;
                    --arg_count;
                } else if (args[arg].equals("send")) {
                    if (arg_count < 4)
                        rc = FEW_SEND_ARGS;
                    else
                        // Sign the executable. The command-line
                        // parameters specify:
                        // args[1] - the name of the executable to sign.
                        // args[2] - the name of the signer. The signer
                        //           should have a private key defined in
                        //           local key store.
                        // args[3] - the password to get the private key
                        //           from the key store.
                        rc = send(args[arg+1], args[arg+2], args[arg+3]);
                    break;
                } else if (args[arg].equals("receive")) {
                    if (arg_count < 2)
                        rc = FEW_RECV_ARGS;
                    else
                        // Verify the file received. The command-line
                        // parameters specify:
                        // args[1] - the name of the file, that contains
                        //           DER encoded SignedExecutable message.
                        rc = receive(args[arg+1]);
                    break;
                } else {
                    System.out.println("Unrecognized command: " + args[arg]);
                    rc = BAD_COMMAND;
                    break;
                }
            }
        }
        Signed.deinitialize();
        if (rc > SUCCESS && rc < SIGNING_FAILED)
            usage();
        if (verbose)
            System.out.println(
                "The utility ran to completion. Result code = "
                    + rc + ".");
        System.exit(rc);
    }
    // Print an informative message that explains the syntax of the
    // command-line
    public static void usage()
    {

        System.out.println("Usage:");
        System.out.println(
    "  java ExeSigner [-v] send <filename> <signer's name> <key password>");
        System.out.println("or");
        System.out.println("  java ExeSigner [-v] receive <filename>");
    }
    // Create an instance of SignedExecutable, sign its 'data' field,
    // and save the result DER encoding of the SignedExecutable to a
    // disk file.
    public static int send(String fileName, String signer, String password)
    {
        ...
    }
    // Verify received executable and if the signature is valid, save the
    // executable code into a disk file.
    public static int receive(String fileName)
    {
        ...
    }
}

To develop the send() method, follow the steps below:

  1. Instantiate a SignedExecutable object.
  2. Fill all the fields of this object except the encryptedDigest, which must be computed.
  3. Construct a DER encoding of the data field using the encodeData() method of the SignedExecutable class.
  4. Extract the private key of the signer from the key store, create the signature object, and compute an encrypted digest for the data.
  5. Assign the computed digest to the encryptedDigest component of SignedExecutable.
  6. Encode the SignedExecutable message and save the encoding into a disk file.
public static int send(String fileName, String signer, String password)
{
    SignedExecutable signedExecutable = null;
    try {
        if (verbose)
            System.out.println(
"***** Phase 1. Generating the SignedExecutable message ...");
        // 1. Create an instance of SignedExecutable. Note that
        // the constructor for the 'code' field uses the
        // OSSByteStorage(File) to associate the ByteStorage
        // object with the existing disk file.
        // Strip any directories from the 'filename'
        String exeName = new File(fileName).getName();
        signedExecutable = new SignedExecutable(
            new OctetString(),        // encryptedDigest
            new UTF8String16(signer), // signer's name
            new UTF8String16("DSA"),  // digest algorithm
            new Executable(           // data field
                new UTF8String16(         // executable name 
                    exeName
                ),
                new HugeOctetString(      // executable code
                    new OSSByteStorage(new File(fileName))
                )
            )
        );
        // 2. Generate DER encoding of the 'data' component
        Coder coder = Signed.getDERCoder();
        if (verbose) {
            System.out.println(
"***** Phase 2. Generating DER encoding of 'data' ...");
            coder.enableEncoderDebugging();
        }
        signedExecutable.encodeData(coder);
        if (verbose)
            System.out.println(
"***** Phase 3. Getting signer's private key & initializing the Signer ...");
        // 3. Get an instance of Signature object to compute the
        // encrypted digest for the 'data'
        KeyStore ks = KeyStore.getInstance("JKS");
        char[] c = password.toCharArray();
        // load .keystore
        ks.load(new FileInputStream(".keystore"), null);
        // read the Private key
        PrivateKey privateKey = (PrivateKey) ks.getKey(signer, c);
        Signature signature = Signature.getInstance("DSA");
        signature.initSign(privateKey);
        if (verbose)
            System.out.println(
"***** Phase 3a. Computing the digest ...");
        // Compute the encrypted digest for the 'data' field
        ByteStorage dataToSign = 
            signedExecutable.getEncodedData();
        // Allocate internal buffer to read octets from the
        // ByteStorage
        int blockSize = 1024;
        byte[] buffer = new byte[blockSize];
        InputStream reader = dataToSign.getReader();
        try {
            int len = -1;
            int octets = 0;
            // Read octets from the ByteStorage and compute
            // the encrypted digest.
            while ((len = reader.read(buffer)) != -1) {
                octets += len;
                signature.update(buffer, 0, len);
            }
            if (verbose)
                System.out.println(octets + " octets(s) processed.");
        } finally {
            reader.close();
        }
        // Get the encrypted digest computed
        byte[] digest = signature.sign();
        if (verbose)
            System.out.println(
"***** Phase 4. Adding the digest to SignedExecutable message ...");
        // 4. Assign the digest computed to the 'encryptedDigest' 
        // component
        signedExecutable.getEncryptedDigest().setValue(digest);
        if (verbose)
            System.out.println(
"***** Phase 5. Writing DER-encoded message to the file ...");
        // 5. Create DER encoding of the SignedExecutable and save it
        // into a disk file
        File derEncoding = new File(fileName + ".der");
        FileOutputStream sink = new FileOutputStream(derEncoding);
        try {
            coder.encode(signedExecutable, sink);
            sink.close();
        } catch (Exception e) {
            // In case of failure, delete the file, 
            // created to store DER encoding
            derEncoding.delete();
            throw e;
        }
    } catch (Exception e) {
        System.out.println("Signing the executable failed: " + e);
        return SIGNING_FAILED;
    } finally {
        // Destroy the instance of SignedExecutable before exit.
        if (signedExecutable != null)
            signedExecutable.delete();
    }
    
    return SUCCESS;
}

For the receive() method, follow these steps:

  1. Decode the SignedExecutable message from the input file.
  2. Get the public key certificate for the signer from the key store and verify the signature for the file received.
  3. If the signature is valid, decode the data component and save the executable code into the disk file.
public static int receive(String fileName)
{
    SignedExecutable signedExecutable = null;
    try {
        // 1. Decode DER encoded SignedExecutable from the input file
        FileInputStream source = new FileInputStream(fileName);
        Coder coder = Signed.getDERCoder();
        if (verbose) {
            System.out.println(
"***** Phase 1. Decoding signed executable ...");
            coder.enableDecoderDebugging();
        }
        signedExecutable = 
            (SignedExecutable)coder.decode(source, new SignedExecutable());
        if (verbose)
            System.out.println(
"***** Phase 2. Getting signers's certificate and initializing the Verifier ...");
        // 2. Get the public key certificate for the signer from the key
        // storage and verify the signature for the file received.
        KeyStore ks = KeyStore.getInstance("JKS");
        // load .keystore
        ks.load(new FileInputStream(".keystore"), null);
        // read the Public key
        java.security.cert.Certificate certificate = 
            ks.getCertificate(
                signedExecutable.getSignerID().stringValue());
        PublicKey publicKey = certificate.getPublicKey();
        Signature signature = Signature.getInstance("DSA");
        signature.initVerify(publicKey);
        if (verbose)
            System.out.println(
"***** Phase 2a. Verifying the signature ...");
        // Get DER encoded 'data' and verify the signature
        ByteStorage dataToVerify = 
            signedExecutable.getEncodedData();
        // Allocate internal buffer to read octets from the
        // ByteStorage
        int blockSize = 1024;
        byte[] buffer = new byte[blockSize];
        InputStream reader = dataToVerify.getReader();
        try {
            int len = -1;
            int octets = 0;
            // Read octets from the ByteStorage and compute
            // the encrypted digest.
            while ((len = reader.read(buffer)) != -1) {
                octets += len;
                signature.update(buffer, 0, len);
            }

            if (verbose)
                System.out.println(octets + " octet(s) processed.");
        } finally {
            reader.close();
        }
        // Verify the signature
        byte[] messageDigest = 
            signedExecutable.getEncryptedDigest().byteArrayValue();
        if (!signature.verify(messageDigest))
            throw new SignatureException(
                "Signature verification failed");
        if (verbose)
            System.out.println(
"***** Phase 3. Verification suceeded. Extracting the executable code ...");
        // 3. Signature verification succeeded,
        // decode the deferred 'data' component and save the
        // executable into a disk file.
        signedExecutable.decodeData(coder);
        Executable executable = signedExecutable.getData();
        if (verbose)
            System.out.println(
"***** Phase 3a. Saving the executable code into a disk file ...");
        ByteStorage executableCode = 
            executable.getCode().byteStorageValue();
        File exeFile = new File(executable.getName().stringValue());
        FileOutputStream code = 
            new FileOutputStream(exeFile);
        reader = executableCode.getReader();
        try {
            int len = -1;
            // Read octets from the ByteStorage and copy
            // them into the result output file
            while ((len = reader.read(buffer)) != -1) {
                code.write(buffer, 0, len);
            }
            code.close();
        } catch (Exception e) {
            exeFile.delete();
            throw e;
        } finally {
            reader.close();
        }
    } catch (Exception e) {
        System.out.println("Extraction of executable failed: " + e);
        return EXTRACTION_FAILED;
    } finally {
        // Destroy the instance of SignedExecutable before exit.
        if (signedExecutable != null)
            signedExecutable.delete();
    }
    
    return SUCCESS;
}

When you compile and run the sample, you get the following output:

java ExeSigner -v send utility.exe john jabberwock

***** Phase 1. Generating the SignedExecutable message ...
***** Phase 2. Generating DER encoding of 'data' ...
Executable SEQUENCE: tag = [UNIVERSAL 16] constructed; length = 189686
  name UTF8String: tag = [UNIVERSAL 12] primitive; length = 11
    0x007500740069006c006900740079002e006500780065
  code OCTET STRING: tag = [UNIVERSAL 4] primitive; length = 189668
    <ValueInFile>
***** Phase 3. Getting signer's private key and initializing the Signer ...
***** Phase 3a. Computing the digest ...
189691 octet(s) processed.
***** Phase 4. Adding the digest to SignedExecutable message ...
***** Phase 5. Writing DER-encoded message to the file ...
SignedExecutable SEQUENCE: tag = [UNIVERSAL 16] constructed; length = 189750
  encryptedDigest OCTET STRING: tag = [UNIVERSAL 4] primitive; length = 46
    0x302c02144e42e0cbf753b7226c888ed5d466fbab7645e1c102142328a8757e04222f19...
  signerID UTF8String: tag = [UNIVERSAL 12] primitive; length = 4
    0x006a006f0068006e
  digestID UTF8String: tag = [UNIVERSAL 12] primitive; length = 3
    0x004400530041
  data Executable TYPE-IDENTIFIER.&Type
    <ValueInFile>
The utility ran to completion. Result code = 0.

or

java ExeSigner -v receive utility.exe.der

***** Phase 1. Decoding signed executable ...
SignedExecutable SEQUENCE: tag = [UNIVERSAL 16] constructed; length = 189750
  encryptedDigest OCTET STRING: tag = [UNIVERSAL 4] primitive; length = 46
    0x302c02144e42e0cbf753b7226c888ed5d466fbab7645e1c102142328a8757e04222f19...
  signerID UTF8String: tag = [UNIVERSAL 12] primitive; length = 4
    0x6a6f686e
  digestID UTF8String: tag = [UNIVERSAL 12] primitive; length = 3
    0x445341
  data Executable TYPE-IDENTIFIER.&Type
    <ValueInFile>
***** Phase 2. Getting signer's certificate and initializing the Verifier ...
***** Phase 2a. Verifying the signature ...
189691 octet(s) processed.
***** Phase 3. Verification succeeded. Extracting the executable code ...
Executable SEQUENCE: tag = [UNIVERSAL 16] constructed; length = 189686
  name UTF8String: tag = [UNIVERSAL 12] primitive; length = 11
    0x7574696c6974792e657865
  code OCTET STRING: tag = [UNIVERSAL 4] primitive; length = 189668
    <ValueInFile>
***** Phase 3a. Saving the executable code into a disk file ...
The utility ran to completion. Result code = 0.

NOTE: The code assumes that the .keystore file is located in the current directory.


Improving Automatic Decoding Speed

The runtime performs automatic decoding with component relation constraints; it detects which field of the message identifies the type of the open type (the value field), and then it searches the information object set to determine the type of the value.

By default, the SOED runtime searches the information object set for the information object using linear lookup. You can change the default behavior by "indexing" the information object set.

For TOED, "indexing" is automatically enabled whenever an object set has a single UNIQUE field. In that case, the object set is internally represented as a HashMap, which provides a fast lookup.

NOTE: The following analogy between an information object set and a relational table (database) applies only to the SOED runtime.

For a large database, a query can run for hours; however, when you index the database, you quickly get the data you look for.

Indexing information object sets implies the usual speed versus space trade-off: automatic decoding runs faster at the cost of extra space consumed by the index created. For this reason, this feature is not enabled by default: you specify an additional command-line switch for the compiler and write extra code to activate the lookup.

Also, the column by which the information object set is indexed should be the one that is used in the component relation constraints to specify the type of the open type value (&id in the example below). In most cases the column can be easily identified in the definition of the information object class by the UNIQUE keyword that follows the column type (&procedureID ProcedureID UNIQUE in the example in the next section). Indexing the information object set by the wrong column will not speed up automatic decoding but will waste space.

You can index information object sets either manually or automatically. To index an individual information object set manually, call its indexByXXX() instance method. Automatic indexing is useful when a protocol specification defines a large number of information object sets (as the NBAP or RANAP protocols do, for example); to activate it, specify an indexing procedure that is applied to every information object set of a particular class as soon as that information object set is instantiated. The indexing procedure is set by the setIndexProcedure() method, which is a class method of the information object set class.

Indexing of individual information object sets

To use the DefaultIndex class, you subclass DefaultIndex by implementing the mapKey() method. The contract of the method is that it takes the value of the key and maps it to a numeric code. The easiest way to define a concrete subclass is to implement this abstract method with an anonymous inner class. The implementation of the mapKey() should meet the following requirements:

  1. Given two keys K1 and K2, mapKey() should return equal numbers for K1 and K2 when K1.equals(K2) == true.
  2. If K1 is not equal to K2 according to equals(), the mapKey() should return distinct hash codes for K1 and K2.
  3. If the information object set contains N rows, the mapKey() should map the values in the index column to the integers in the range 0..M, where M is approximately equal to N.

Example

ProcedureID ::= SEQUENCE {
    procedureCode INTEGER (0..255),
    ddMode ENUMERATED {tdd, fdd, common}
}

ELEMENTARY-PROCEDURE ::= CLASS {
    &InitiatingMessage,
    &SuccessfulOutcome OPTIONAL,
    &UnsuccessfulOutcome OPTIONAL,
    &Outcome OPTIONAL,
    &messageDiscriminator MessageDiscriminator,
    &procedureID ProcedureID UNIQUE,
    &criticality Criticality DEFAULT ignore
}

ELEMENTARY-PROCEDURES ELEMENTARY-PROCEDURE ::= {
    <list of information objects>
}

Outcome ::= SEQUENCE {
    id ELEMENTARY-PROCEDURE.&procedureID ({ELEMENTARY-PROCEDURES}),
    value ELEMENTARY-PROCEDURE.&Outcome ({ELEMENTARY-PROCEDURES}{@id})
}

If you compile the above ASN.1 with the -indexinfoobjectsets command-line option, the compiler generates the following class for the information object set:

public class ELEMENTARY_PROCEDURE_OSET extends IndexedInfoObjectSet { 
    ...
    public boolean indexByProcedureID(Index index);
    ...
}

public class MyModule extends ASN1Module { 
    ELEMENTARY_PROCEDURE_OSET eLEMENTARY_PROCEDURES =
        new ELEMENTARY_PROCEDURE_OSET(
            new ELEMENTARY_PROCEDURE[] { ... }, "MyModule",
            "ELEMENTARY-PROCEDURES");
    ...
}

The component relation constraints in the definition of the Outcome message indicate that automatic decoding will look up the information object set by the procedureID column to determine the ASN.1 type carried by the Outcome.value open type. To improve the performance of the lookup, the information object set should be indexed by the procedureID. To index ELEMENTARY-PROCEDURES by procedureID, add the following code to your application:

boolean success = MyModule.eLEMENTARY_PROCEDURES.indexByProcedureID(
    new DefaultIndex() {
	public int mapKey(AbstractData key) {
            ProcedureID id = (ProcedureID)key;   
    	    return (int)(id.getProcedureCode() +
        	id.getDdMode().longValue() * 256);
        }
    });

Note that the code uses an anonymous inner class to implement the mapKey() method. Also, the implemented mapKey() method uses the knowledge of the values in the index column (they are values of a SEQUENCE with two components: one is a whole number in the range 0..255 and the second is an ENUMERATED with a small number of enumerators). After the indexByProcedureID() method is invoked for the ELEMENTARY-PROCEDURES information object set, the runtime will use the index created each time it looks up the information object with the given value of procedureID.

Indexing all information object sets

When the protocol definition uses information object sets extensively, an application might need a better way to index such a large number of information object sets than invoking the indexByXXX() method for each individual information object set. The runtime allows you to specify a default indexing procedure that is automatically applied to every instance of an information object set of a particular class. To activate automatic indexing, create a class that implements the IndexProcedure interface and associate it with the corresponding class of information object sets.

Example

PROTOCOL-IES ::= CLASS {    
    &id ProtocolIE-ID UNIQUE,    
    &criticality Criticality,
    &Value,
    &presence Presence
}
...
CommonTransportChannelSetupRequestFDD-IEs PROTOCOL-IES ::= ...
CommonTransportChannelSetupRequestTDD-IEs PROTOCOL-IES ::= ...
... (another 143 information object sets of this class)
ErrorIndication-IEs PROTOCOL-IES ::= ...

Automatic indexing is enabled for all 146 information object sets as follows:

boolean success = MyMod.PROTOCOL_IES_OSET.setIndexProcedure(
    new IndexProcedure() {
        public Index create(IndexedInfoObjectSet oset) {
            // Do not index information object sets that are empty
            // or contain just a few elements.
            if (oset.getSize() > 2) {
                Index index = new DefaultIndex() {
                    public int mapKey(AbstractData key) {
                         ProtocolIE_ID id = (ProtocolIE_ID)key;
                         return id.intValue();
                    }
                };
                PROTOCOL_IES_OSET oset_ies = 
                     (PROTOCOL_IES_OSET)oset;
                if (oset_ies.indexById(index))
                    return oset_ies.getIndex();
            }
            return null;
        }
    });

After setIndexProcedure() is invoked, all instances of PROTOCOL_IES_OSET are automatically indexed right after the information object set is instantiated. Note that this example uses two anonymous inner classes:

  • The first one is used to implement the IndexProcedure interface. Its create() method checks that the indexing of the information object set is useful. If the information object set contains just a few information objects, the hash table will not noticeably increase the performance of the lookups. To index the information object set, create() invokes the indexById() method generated by the asn1pjav compiler in the PROTOCOL_IES_OSET class.
  • The second anonymous inner class, whose instance is passed into indexById(), implements the mapKey() method and inherits all the other functionality from the DefaultIndex superclass.

Instantiation of information object sets defined in the ASN.1 specification occurs at the time when the class for the corresponding ASN.1 module is loaded into the JVM. Since automatic indexing is performed in the constructor of the information object set, the above code should be executed before the corresponding ASN1Module is loaded. The preferred place to enable automatic indexing of information object sets is the application's startup code.

Lazy indexing

It is not likely that the application will use all 240 information object sets at once. Most of the indexes automatically created for these numerous information object sets will be dead weight that simply consumes memory. For this reason, you might prefer to implement lazy indexing, or indexing on demand. The idea is that you provide, via the IndexProcedure, a dummy implementation of Index that does not do any indexing but serves as an indicator that the information object set should be indexed later. Actual indexing occurs as soon as the runtime needs to access this index to look up an information object in the information object set.

Example

public class DummyIndex implements Index {
   protected PROTOCOL_IES_OSET mOset;
   public DummyIndex(PROTOCOL_IES_OSET oset)
   {
	mOset = oset;
   }

   public Enumeration lookup(AbstractData key)
   {
	// This is the first lookup in the associated info object set.
	// Compute index and replace this dummy index with a real one.
	Index index = new DefaultIndex() {
	    public int mapKey(AbstractData key)
	    {
		...
	    }
	};
	if (mOset.indexById(index)) {
	    // Indexing succeeded. Lookup the key using the new index.
	    index = mOset.getIndex();
	    return index.lookup(key);
	} else
	    // Indexing failed. indexById() has cleared the index from the
            // information object set. Inform the caller (the lookup() of
	    // the PROTOCOL_IES_OSET) that it should fall back to linear
	    // search.
	    return null;
    }
    // All methods below are dummies 
    public Index add(AbstractData key, InfoObject row)
    {
	return this;
    }
    public Index delete(AbstractData key, InfoObject row)
    {
	return this;
    }
    public Index reset()
    {
	return this;
    }
}

First, you implement DummyIndex, which "listens" for the invocation of lookup(). The lookup() method of Index is invoked by the lookup() method of the information object set associated with this Index. By listening for the invocation of lookup(), the DummyIndex knows when the runtime accesses the index for the first time (that is, when it first attempts to look up the information object set using this index). As soon as DummyIndex.lookup() is invoked, it performs the actual indexing of the information object set and then the lookup using the newly created index. Note that the call to indexById() replaces the reference to DummyIndex in mOset, so all further lookups are redirected to DefaultIndex.lookup() (the lookup by means of the hash table).

A special case to consider is when lazy indexing fails to create the actual index, for example, because the information object set contains rows with duplicate values of the index column. In such a case, the DummyIndex.lookup() has no suitable index to lookup mOset for the information object with the &id matching the key. For this reason, it delegates the lookup to the caller (returns null). The caller (IndexedInfoObjectSet.lookup()) checks the return value of Index.lookup() and falls back to linear search if the value returned is null.

Finally, you activate lazy indexing by calling the setIndexProcedure() method of the PROTOCOL_IES_OSET class (note that PROTOCOL_IES_OSET is defined just before the Lazy Indexing section):

// Activate lazy indexing for information object sets of the 
// PROTOCOL_IES_OSET class

boolean success = PROTOCOL_IES_OSET.setIndexProcedure(new IndexProcedure() {
    public Index create(IndexedInfoObjectSet oset)
    {
	PROTOCOL_IES_OSET os = (PROTOCOL_IES_OSET)oset;
	os.indexById(new DummyIndex(os));
	return os.getIndex();
    }
});

The above code implements the IndexProcedure as an anonymous inner class.

Implementing a custom Index

If the default implementation does not fit your needs, you can create your own implementation of the Index interface. This implementation can be based on any advanced indexing technique, such as B-tree, binary tree, or other flavor of the hash table. Here's how the custom implementation can use the flexibility of the Index interface:

ProcedureID ::= INTEGER

MY-CLASS ::= CLASS {
    &id ProcedureID UNIQUE,
    &Type
}

MyInfoObjectSet MY-CLASS ::= {
    {&id 1, &Type INTEGER} |
    {&id 2, &Type UTF8String},
    ...
}

Note that the index field &id in MyInfoObjectSet takes consecutive INTEGER values. In this case, the hash table can be avoided altogether by reducing the lookup to a call to getElement(int atIndex):

public class LinearIndex implements Index {
    protected MY_CLASS_OSET oset = null;
    public LinearIndex(MY_CLASS_OSET ios)
    {
	oset = ios;
    }
    public Enumeration lookup(AbstractData key)
    {
	int atIndex = ((INTEGER)key).intValue();
	final Object row = (atIndex > 0 && atIndex <= oset.getSize()) ?
	    oset.getElement(atIndex-1) : null;
	// We will reuse SingularEnumeration utility class, defined
	// in the DefaultIndex.
	return new DefaultIndex.SingularEnumeration(row);
    }
    public Index add(AbstractData key, InfoObject row)
    {
	// Check that key value matches the position of the row in the
	// information object set
	int atPosition = ((INTEGER)key).intValue();
	if (atPosition < 1 || atPosition > oset.getSize() ||
	    !row.equals(oset.getElement(atPosition-1)))
	    // The linearity is broken. Take some appropriate action
	    // (for example, terminate indexing by returning null).
	    return null;
	else
	    return this;
    }
    ...
}

Here is how you activate this custom Index in your application code:

MY_CLASS_OSET oset = MyModule.myInfoObjectSet;   
oset.indexById(new LinearIndex(oset));

When MyInfoObjectSet is defined as extensible, new information objects can be added to this information object set at run time. How should LinearIndex handle the case when the application adds a new information object {&id 10, &Type OBJECT IDENTIFIER} to the information object set? The sequence of index values 1, 2, 10 is no longer linear. The corresponding code in the add() method can take two possible actions:

  1. Terminate indexing (return null to the caller).
  2. Switch to another indexing method that does not assume that the values of the index column are easily mappable to consecutive integer numbers.

To implement (2), you can write a complex class that originally behaves like LinearIndex, but falls back to the DefaultIndex that is based on the hash table:

public class SmartIndex implements Index {

    boolean isLinear = true;
    Index index = null;
    MY_CLASS_OSET oset = null;

    public SmartIndex(MY_CLASS_OSET ios)
    {
	oset = ios;
    }

    public Enumeration lookup(AbstractData key)
    {
	if (isLinear) {
	    // Behave like the LinearIndex, i.e. use getElement(int) to
	    // retrieve the row with the given value of the index column.
	} else {
	    // Use the 'index' hash table to lookup the row with the
	    // given value of the index column.
	}
    }

    public Index add(AbstractData key, InfoObject row)
    {
	if (isLinear) {
	    if ('row' breaks the linearity) {
		isLinear = false;
		index = new DefaultIndex() {
		    public int mapKey(AbstractData key)
		    {
			return ((INTEGER)key).intValue();
		    }
		};
		// Add all information objects in the information object set
		// to the 'index', including the new row.
		Enumeration elements = oset.elements();
		while (elements.hasMoreElements()) {
		    MY_CLASS obj = (MY_CLASS)elements.nextElement();
		    AbstractData key = obj.getId();
		    if (index.put(key, obj) != null)
			// Abort indexing in case of duplicate key
			return null;
		}
	    }
	} else {
	    AbstractData key = ((MY_CLASS)row).getId();
	    if (index.put(key, row) != null)
		// Abort indexing in case of duplicate key
		return null;
	}
	return this;
    }
    ...
}

A better choice is to have two smaller (and thus more manageable) classes rather than one complex class. Since the add() method returns an Index, this can be done as follows:

public class LinearIndex {
    ...
    
    public Index add(AbstractData key, InfoObject row)
    {
	// Check that the key value matches the position of the row in the
	// information object set
	int atPosition = ((INTEGER)key).intValue();
	if (atPosition < 1 || atPosition > oset.getSize() ||
	    !row.equals(oset.getElement(atPosition-1))) {
	    // The linearity is broken. Fall back to some generic indexing
	    // (say, to default implementation, based on hash table).
	    Index genericIndex = new DefaultIndex() {
		public int mapKey(AbstractData key)
		{
		    return ((INTEGER)key).intValue();
		}
	    };
	    // Add all the information objects that are in the information
	    // object set to the new index
	    oset.indexById(genericIndex);
	    // If indexById() fails, oset.getIndex() will return null.
	    return oset.getIndex();
	} else
	    return this;
    }
} 

Creating Threads

To achieve thread safety during encoding and decoding, you can create a new Coder for each individual thread.

Example

The following class creates a Java Thread that can be used to encode instances of any class generated by the compiler that represents a PDU. The project-package-name is Example.

import java.io.*;
import com.oss.asn1.*;

public class EncodePDUThread implements Runnable {

    private Coder               mCoder;
    private AbstractData        mObject;
    private OutputStream        mSink;
    private Thread              mThread;

    public EncodePDUThread(AbstractData object, OutputStream sink) {
        // create a new instance of the default Coder
        mCoder = Example.getDefaultCoder();
        mObject = object;
        mSink = sink;
        mThread = new Thread(this);
    }

    public void run() {
        try {
            if (mObject.isEncodable())
                mCoder.encode(mObject, mSink);
            else
                System.out.println
                    (mObject.getClass().getName() + " is not a PDU!");
        } catch (Exception e) {
            // It would be wiser to check for all possible exceptions
            // here, but for the sake of brevity, we'll skip it.
            System.out.println(e);
        }
    }

    public void start() {
        mThread.start();
    }

    public void stop() {
        mThread.stop();
    }
}
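
A hypothetical usage of this class might look like the following sketch; MyPDU stands for any compiler-generated PDU class of the Example project, and buildPdu() is a placeholder for application code that fills it in:

// Encode a PDU asynchronously on a separate thread.
MyPDU pdu = buildPdu();
OutputStream sink = new FileOutputStream("mypdu.bin");
EncodePDUThread encoder = new EncodePDUThread(pdu, sink);
encoder.start();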

This documentation applies to the OSS® ASN.1 Tools for Java release 8.7 and later.

Copyright © 2024 OSS Nokalva, Inc. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise, without the prior permission of OSS Nokalva, Inc.
Every distributed copy of the OSS® ASN.1 Tools for Java is associated with a specific license and related unique license number. That license determines, among other things, what functions of the OSS ASN.1 Tools for Java are available to you.