/hardware/interfaces/neuralnetworks/1.3/ |
D | IBuffer.hal |
    45  * @param dimensions Updated dimensional information. If the dimensions of the IBuffer object
    46  * are not fully specified, then the dimensions must be fully specified here. If the
    47  * dimensions of the IBuffer object are fully specified, then the dimensions may be empty
    48  * here. If dimensions.size() > 0, then all dimensions must be specified here, and any
    54  * - INVALID_ARGUMENT if provided memory is invalid, or if the dimensions is invalid
    56  copyFrom(memory src, vec<uint32_t> dimensions) generates (ErrorStatus status);
|
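The IBuffer.hal excerpt above (lines 45-56) describes how copyFrom() treats its dimensions argument: a fully specified shape is required when the buffer's own dimensions are not fully specified, and the argument may be empty otherwise, with INVALID_ARGUMENT returned for bad memory or shapes. The following is a minimal client-side sketch of both cases, not code from the listing; the buffer and memory handles are assumed to come from IDevice::allocate() and a prior allocation.

    // Sketch only: assumes a V1_3 IBuffer and a hidl_memory of matching size.
    #include <android/hardware/neuralnetworks/1.3/IBuffer.h>
    #include <android/hardware/neuralnetworks/1.3/types.h>

    using ::android::sp;
    using ::android::hardware::Return;
    using ::android::hardware::hidl_memory;
    using ::android::hardware::hidl_vec;
    using ::android::hardware::neuralnetworks::V1_3::ErrorStatus;
    using ::android::hardware::neuralnetworks::V1_3::IBuffer;

    // The buffer's shape was NOT fully specified at allocation time,
    // so a fully specified shape must be supplied with the copy.
    bool copyWithExplicitShape(const sp<IBuffer>& buffer, const hidl_memory& src) {
        const hidl_vec<uint32_t> fullShape = {1, 32, 32, 8};  // illustrative shape
        Return<ErrorStatus> ret = buffer->copyFrom(src, fullShape);
        return ret.isOk() && static_cast<ErrorStatus>(ret) == ErrorStatus::NONE;
    }

    // The buffer's shape was already fully specified, so per lines 47-48
    // the dimensions argument may be left empty.
    bool copyWithImplicitShape(const sp<IBuffer>& buffer, const hidl_memory& src) {
        Return<ErrorStatus> ret = buffer->copyFrom(src, hidl_vec<uint32_t>{});
        return ret.isOk() && static_cast<ErrorStatus>(ret) == ErrorStatus::NONE;
    }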
D | types.t |
    255  * For a scalar operand, dimensions.size() must be 0.
    257  * A tensor operand with all dimensions specified has "fully
    258  * specified" dimensions. Whenever possible (i.e., whenever the
    259  * dimensions are known at model construction time), a tensor
    261  * specified dimensions, in order to enable the best possible
    264  * If a tensor operand's dimensions are not fully specified, the
    265  * dimensions of the operand are deduced from the operand
    266  * dimensions and values of the operation for which that operand
    268  * {@link OperationType::WHILE} operation input operand dimensions in the
    271  * In the following situations, a tensor operand's dimensions must
    [all …]
|
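The types.t excerpt above distinguishes scalar operands (dimensions.size() == 0) from tensor operands with "fully specified" dimensions. In the NNAPI HAL an unknown extent is written as 0, so the rule can be captured by a small helper like the sketch below; the function name and signature are illustrative, not from the listing.

    #include <cstdint>
    #include <vector>

    // Sketch: true if the operand shape is "fully specified" in the sense of
    // the documentation above. Scalars carry no dimensions; tensors must have
    // rank >= 1 with every extent known (non-zero).
    bool hasFullySpecifiedDimensions(bool isTensor, const std::vector<uint32_t>& dimensions) {
        if (!isTensor) {
            return dimensions.empty();   // scalars have dimensions.size() == 0
        }
        if (dimensions.empty()) {
            return false;                // tensor of unspecified rank
        }
        for (uint32_t d : dimensions) {
            if (d == 0) return false;    // 0 marks an unspecified extent
        }
        return true;
    }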
D | types.hal |
    97  * dimensions. The output is the sum of both input tensors, optionally
    100  * Two dimensions are compatible when:
    105  * input operands. It starts with the trailing dimensions, and works its
    129  * * 1: A tensor of the same {@link OperandType}, and compatible dimensions
    151  * The output dimensions are functions of the filter dimensions, stride, and
    237  * dimensions except the dimension along the concatenation axis.
    279  * The output dimensions are functions of the filter dimensions, stride, and
    444  * The output dimensions are functions of the filter dimensions, stride, and
    600  * and width dimensions. The value block_size indicates the input block size
    727  * * 0: The output tensor, of the same {@link OperandType} and dimensions as
    [all …]
|
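Several of the quoted operation descriptions (e.g. ADD at lines 97-129 above) rely on NumPy-style broadcasting: two dimensions are compatible when they are equal or one of them is 1, matched from the trailing dimensions backwards. A sketch of the resulting output-shape computation, written independently of the HAL sources:

    #include <algorithm>
    #include <cstdint>
    #include <optional>
    #include <utility>
    #include <vector>

    // Walk both shapes from the trailing dimension, take the larger of each
    // pair, and fail when the extents differ and neither is 1.
    std::optional<std::vector<uint32_t>> broadcastShape(std::vector<uint32_t> a,
                                                        std::vector<uint32_t> b) {
        if (a.size() < b.size()) std::swap(a, b);
        std::vector<uint32_t> out(a);
        const size_t offset = a.size() - b.size();
        for (size_t i = 0; i < b.size(); ++i) {
            const uint32_t x = a[offset + i];
            const uint32_t y = b[i];
            if (x != y && x != 1 && y != 1) return std::nullopt;  // incompatible
            out[offset + i] = std::max(x, y);
        }
        return out;
    }

    // Example: broadcasting {1, 32, 32, 8} with {8} yields {1, 32, 32, 8}.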
/hardware/interfaces/neuralnetworks/1.3/vts/functional/ |
D | MemoryDomainTests.cpp |
    76  static_cast<OperandType>(operand.type), operand.dimensions); in createDummyData()
    88  .dimensions = {}, in createInt32Scalar()
    104  .dimensions = {operand.dimensions[3], 3, 3, operand.dimensions[3]}, in createConvModel()
    112  .dimensions = {operand.dimensions[3]}, in createConvModel()
    159  .dimensions = {}, in createSingleAddModel()
    241  kTestOperand.dimensions)) {} in MemoryDomainTestBase()
    277  .dimensions = {1, 32, 32, 8},
    286  .dimensions = {1, 32, 32, 8},
    295  .dimensions = {1, 32, 32, 8},
    304  .dimensions = {1, 32, 32, 8},
    [all …]
|
D | Utils.cpp |
    82  if (isTensor(operand.type) && operand.dimensions.size() == 0) return 0; in sizeOfData()
    83  return std::accumulate(operand.dimensions.begin(), operand.dimensions.end(), dataSize, in sizeOfData()
|
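Lines 82-83 of Utils.cpp above compute an operand's byte size by folding the per-element size over its dimensions, returning 0 when a tensor's rank is still unknown. A standalone sketch of the same pattern, with the operand fields replaced by plain parameters:

    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <numeric>
    #include <vector>

    // Multiply the element size by every extent; an empty dimension vector on
    // a tensor means the size is not yet known, so report 0.
    size_t sizeOfData(bool isTensor, size_t elementSize,
                      const std::vector<uint32_t>& dimensions) {
        if (isTensor && dimensions.empty()) return 0;
        return std::accumulate(dimensions.begin(), dimensions.end(), elementSize,
                               std::multiplies<size_t>());
    }

    // Example: a TENSOR_FLOAT32 of shape {1, 32, 32, 8} occupies
    // 4 * 1 * 32 * 32 * 8 = 32768 bytes.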
D | ValidateModel.cpp |
    106  .dimensions = {}, in addOperand()
    227  size += sizeForBinder(operand.dimensions); in sizeForBinder()
    437  model->main.operands[operand].dimensions = in mutateOperandRankTest()
    779  newOperand.dimensions = hidl_vec<uint32_t>(); in mutateOperand()
    786  newOperand.dimensions = in mutateOperand()
    787  operand->dimensions.size() > 0 ? operand->dimensions : hidl_vec<uint32_t>({1}); in mutateOperand()
    792  newOperand.dimensions = in mutateOperand()
    793  operand->dimensions.size() > 0 ? operand->dimensions : hidl_vec<uint32_t>({1}); in mutateOperand()
    800  newOperand.dimensions = in mutateOperand()
    801  operand->dimensions.size() > 0 ? operand->dimensions : hidl_vec<uint32_t>({1}); in mutateOperand()
    [all …]
|
D | GeneratedTestHarness.cpp |
    233  .dimensions = op.dimensions, in createSubgraph()
    332  auto& dims = model->main.operands[i].dimensions; in makeOutputDimensionsUnspecified()
    388  inputs[i] = {.hasNoValue = false, .location = loc, .dimensions = {}}; in createRequest()
    398  inputs[i] = {.hasNoValue = false, .location = loc, .dimensions = {}}; in createRequest()
    414  outputs[i] = {.hasNoValue = false, .location = loc, .dimensions = {}}; in createRequest()
    432  outputs[i] = {.hasNoValue = false, .location = loc, .dimensions = {}}; in createRequest()
    756  const auto& actual = outputShapes[i].dimensions; in EvaluatePreparedModel()
    758  testModel.main.operands[testModel.main.outputIndexes[i]].dimensions; in EvaluatePreparedModel()
    776  const auto& expect = testModel.main.operands[testModel.main.outputIndexes[i]].dimensions; in EvaluatePreparedModel()
    777  const std::vector<uint32_t> actual = outputShapes[i].dimensions; in EvaluatePreparedModel()
|
/hardware/interfaces/neuralnetworks/1.1/ |
D | types.hal |
    33  * dimensions of shape block_shape + [batch], interleaves these blocks back
    34  * into the grid defined by the spatial dimensions [1, ..., M], to obtain a
    63  * dimensions. The output is the result of dividing the first input tensor
    66  * Two dimensions are compatible when:
    71  * input operands. It starts with the trailing dimensions, and works its way
    86  * * 1: A tensor of the same {@link OperandType}, and compatible dimensions
    98  * Computes the mean of elements across dimensions of a tensor.
    100  * Reduces the input tensor along the given dimensions to reduce. Unless
    102  * in axis. If keep_dims is true, the reduced dimensions are retained with
    113  * * 1: A 1-D Tensor of {@link OperandType::TENSOR_INT32}. The dimensions
    [all …]
|
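The 1.1 types.hal excerpt above (lines 98-113) describes MEAN: the input is reduced along the axes listed in a 1-D TENSOR_INT32 operand, and the reduced dimensions are either dropped or, when keep_dims is true, retained with length 1. A sketch of the resulting shape computation; the names are illustrative and negative-axis handling is assumed from the NNAPI convention rather than quoted here.

    #include <cstdint>
    #include <set>
    #include <vector>

    // Compute the output shape of a MEAN-style reduction. Negative axes count
    // from the end; reduced axes are dropped or kept with extent 1.
    std::vector<uint32_t> reducedShape(const std::vector<uint32_t>& input,
                                       const std::vector<int32_t>& axes, bool keepDims) {
        const int32_t rank = static_cast<int32_t>(input.size());
        std::set<int32_t> reduce;
        for (int32_t axis : axes) {
            reduce.insert(axis < 0 ? axis + rank : axis);
        }
        std::vector<uint32_t> out;
        for (int32_t i = 0; i < rank; ++i) {
            if (reduce.count(i) == 0) {
                out.push_back(input[i]);       // dimension kept as-is
            } else if (keepDims) {
                out.push_back(1);              // reduced but retained with length 1
            }
        }
        return out;
    }

    // Example: input {1, 32, 32, 8}, axes {1, 2}, keepDims = false -> {1, 8};
    // with keepDims = true -> {1, 1, 1, 8}.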
/hardware/interfaces/neuralnetworks/1.0/vts/functional/ |
D | BasicTests.cpp |
    81  .dimensions = {1}, in TEST_P()
    91  .dimensions = {1}, in TEST_P()
    101  .dimensions = {}, in TEST_P()
    111  .dimensions = {1}, in TEST_P()
    121  .dimensions = {1}, in TEST_P()
    131  .dimensions = {1}, in TEST_P()
|
D | Utils.cpp |
    118  inputs[i] = {.hasNoValue = false, .location = loc, .dimensions = {}}; in createRequest()
    140  outputs[i] = {.hasNoValue = false, .location = loc, .dimensions = {}}; in createRequest()
    213  if (isTensor(operand.type) && operand.dimensions.size() == 0) return 0; in sizeOfData()
    214  return std::accumulate(operand.dimensions.begin(), operand.dimensions.end(), dataSize, in sizeOfData()
|
D | ValidateModel.cpp |
    79  .dimensions = {}, in addOperand()
    174  size += sizeForBinder(operand.dimensions); in sizeForBinder()
    345  model->operands[operand].dimensions = std::vector<uint32_t>(invalidRank, 0); in mutateOperandRankTest()
    649  newOperand.dimensions = hidl_vec<uint32_t>(); in mutateOperand()
    654  newOperand.dimensions = in mutateOperand()
    655  operand->dimensions.size() > 0 ? operand->dimensions : hidl_vec<uint32_t>({1}); in mutateOperand()
    660  newOperand.dimensions = in mutateOperand()
    661  operand->dimensions.size() > 0 ? operand->dimensions : hidl_vec<uint32_t>({1}); in mutateOperand()
    665  newOperand.dimensions = in mutateOperand()
    666  operand->dimensions.size() > 0 ? operand->dimensions : hidl_vec<uint32_t>({1}); in mutateOperand()
|
/hardware/interfaces/neuralnetworks/1.0/ |
D | types.t |
    190  * For a scalar operand, dimensions.size() must be 0.
    192  * For a tensor operand, dimensions.size() must be at least 1;
    193  * however, any of the dimensions may be unspecified.
    195  * A tensor operand with all dimensions specified has "fully
    196  * specified" dimensions. Whenever possible (i.e., whenever the
    197  * dimensions are known at model construction time), a tensor
    199  * specified dimensions, in order to enable the best possible
    202  * If a tensor operand's dimensions are not fully specified, the
    203  * dimensions of the operand are deduced from the operand
    204  * dimensions and values of the operation for which that operand
    [all …]
|
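Lines 190-193 of the 1.0 types.t above state the rank rule directly: a scalar operand must have dimensions.size() == 0, a tensor operand must have dimensions.size() >= 1, and individual tensor extents may still be unspecified. A tiny sketch of just that check; the helper name is hypothetical.

    #include <cstdint>
    #include <vector>

    // Mirrors the 1.0 rule quoted above: scalars carry no dimensions, tensors
    // carry at least one. An extent of 0 merely means "unspecified" here, so
    // it is not rejected by this check.
    bool rankIsConsistent(bool operandIsTensor, const std::vector<uint32_t>& dimensions) {
        return operandIsTensor ? !dimensions.empty() : dimensions.empty();
    }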
D | types.hal |
    26  * scalar values and must have no dimensions.
    84  * dimensions. The output is the sum of both input tensors, optionally
    87  * Two dimensions are compatible when:
    92  * input operands. It starts with the trailing dimensions, and works its
    109  * * 1: A tensor of the same {@link OperandType}, and compatible dimensions
    127  * The output dimensions are functions of the filter dimensions, stride, and
    199  * dimensions except the dimension along the concatenation axis.
    231  * The output dimensions are functions of the filter dimensions, stride, and
    327  * The output dimensions are functions of the filter dimensions, stride, and
    419  * and width dimensions. The value block_size indicates the input block size
    [all …]
|
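The 1.0 types.hal entries at lines 127, 231 and 327 above repeat that "the output dimensions are functions of the filter dimensions, stride, and padding". For explicit padding, the usual relationship is the standard convolution formula sketched below; this is assumed from the common definition rather than copied from the HAL text.

    #include <cstdint>

    // Spatial output extent of a CONV_2D / pooling style operation with
    // explicit padding; the same formula applies independently to the height
    // and width dimensions.
    uint32_t outputExtent(uint32_t input, uint32_t filter, uint32_t stride,
                          uint32_t padHead, uint32_t padTail) {
        return (input - filter + padHead + padTail) / stride + 1;
    }

    // Example: input width 32, filter width 3, stride 1, padding 1/1 gives
    // (32 - 3 + 1 + 1) / 1 + 1 = 32, i.e. a "same"-sized output.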
/hardware/interfaces/neuralnetworks/1.1/vts/functional/ |
D | BasicTests.cpp |
    88  .dimensions = {1}, in TEST_P()
    98  .dimensions = {1}, in TEST_P()
    108  .dimensions = {}, in TEST_P()
    118  .dimensions = {1}, in TEST_P()
    128  .dimensions = {1}, in TEST_P()
    138  .dimensions = {1}, in TEST_P()
|
D | ValidateModel.cpp |
    98  .dimensions = {}, in addOperand()
    193  size += sizeForBinder(operand.dimensions); in sizeForBinder()
    368  model->operands[operand].dimensions = std::vector<uint32_t>(invalidRank, 0); in mutateOperandRankTest()
    681  newOperand.dimensions = hidl_vec<uint32_t>(); in mutateOperand()
    686  newOperand.dimensions = in mutateOperand()
    687  operand->dimensions.size() > 0 ? operand->dimensions : hidl_vec<uint32_t>({1}); in mutateOperand()
    692  newOperand.dimensions = in mutateOperand()
    693  operand->dimensions.size() > 0 ? operand->dimensions : hidl_vec<uint32_t>({1}); in mutateOperand()
    697  newOperand.dimensions = in mutateOperand()
    698  operand->dimensions.size() > 0 ? operand->dimensions : hidl_vec<uint32_t>({1}); in mutateOperand()
|
/hardware/qcom/neuralnetworks/hvxservice/1.0/ |
D | HexagonModel.cpp |
    38  .dimensions = operand.dimensions, in getOperandsInfo()
    111  .dimensions = mOperands[operand].dimensions, in getShape()
    121  mOperands[operand].dimensions = shape.dimensions; in setShape()
    149  std::vector<uint32_t> dims = getAlignedDimensions(operand.dimensions, 4); in addOperand()
    202  std::vector<uint32_t> dims = getAlignedDimensions(mOperands[operand].dimensions, 4); in createConvFilterTensor()
    224  std::vector<uint32_t> dims = getAlignedDimensions(mOperands[operand].dimensions, 4); in createDepthwiseFilterTensor()
    233  std::vector<uint32_t> dims = getAlignedDimensions(mOperands[operand].dimensions, 4); in createFullyConnectedWeightTensor()
    298  outputs.push_back(make_hexagon_nn_output(operand.dimensions, getSize(operand.type))); in getHexagonOutputs()
    429  make_hexagon_nn_output(mOperands[outputs[0]].dimensions, sizeof(uint8_t)); in addFusedQuant8Operation()
    431  make_hexagon_nn_output(mOperands[outputs[0]].dimensions, sizeof(int32_t)); in addFusedQuant8Operation()
    [all …]
|
D | HexagonOperationsCheck.cpp |
    86  HEXAGON_SOFT_ASSERT_NE(getPadding(inShape.dimensions[2], inShape.dimensions[1], in pool()
    97  nn::calculateExplicitPadding(inShape.dimensions[2], stride_width, filter_width, in pool()
    99  nn::calculateExplicitPadding(inShape.dimensions[1], stride_height, filter_height, in pool()
    180  getPadding(inputShape.dimensions[2], inputShape.dimensions[1], stride_width, in conv_2d()
    181  stride_height, filterShape.dimensions[2], filterShape.dimensions[1], in conv_2d()
    189  nn::calculateExplicitPadding(inputShape.dimensions[2], stride_width, in conv_2d()
    190  filterShape.dimensions[2], padding_implicit, &padding_left, in conv_2d()
    192  nn::calculateExplicitPadding(inputShape.dimensions[1], stride_height, in conv_2d()
    193  filterShape.dimensions[1], padding_implicit, &padding_top, in conv_2d()
    240  getPadding(inputShape.dimensions[2], inputShape.dimensions[1], stride_width, in depthwise_conv_2d()
    [all …]
|
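The HexagonOperationsCheck.cpp entry above (and HexagonOperationsPrepare.cpp in the next entry) repeatedly passes dimensions[2] (width) and dimensions[1] (height) of the NHWC shapes to getPadding()/nn::calculateExplicitPadding(). A sketch of what such an implicit-to-explicit padding conversion typically computes; the padding constants and the exact splitting behavior are assumptions based on the common SAME/VALID definition, not quotes from the driver.

    #include <algorithm>
    #include <cstdint>

    // Assumed padding codes; NNAPI uses 1 for SAME and 2 for VALID.
    constexpr int32_t kPaddingSame = 1;
    constexpr int32_t kPaddingValid = 2;

    // For SAME padding the output is ceil(input / stride) and the required
    // padding is split between the head (top/left) and tail (bottom/right);
    // VALID padding adds nothing.
    void calculateExplicitPadding(uint32_t input, uint32_t stride, uint32_t filter,
                                  int32_t paddingImplicit, uint32_t* paddingHead,
                                  uint32_t* paddingTail) {
        *paddingHead = 0;
        *paddingTail = 0;
        if (paddingImplicit == kPaddingSame) {
            uint32_t outSize = (input + stride - 1) / stride;  // ceil(input / stride)
            uint32_t needed = std::max<int64_t>(
                    0, static_cast<int64_t>(outSize - 1) * stride + filter - input);
            *paddingHead = needed / 2;
            *paddingTail = needed - *paddingHead;
        }
        // kPaddingValid: leave both at 0.
    }

    // Example: input 32, stride 2, filter 3, SAME -> output 16,
    // needed = 15 * 2 + 3 - 32 = 1, so paddingHead = 0 and paddingTail = 1.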
D | HexagonOperationsPrepare.cpp |
    79  pad = getPadding(inputShape.dimensions[2], inputShape.dimensions[1], stride_width, in average_pool_2d()
    115  const int32_t dims = model->getShape(ins[0]).dimensions.size(); in concatenation()
    151  pad = getPadding(inputShape.dimensions[2], inputShape.dimensions[1], stride_width, in conv_2d()
    152  stride_height, filterShape.dimensions[2], filterShape.dimensions[1], in conv_2d()
    201  pad = getPadding(inputShape.dimensions[2], inputShape.dimensions[1], stride_width, in depthwise_conv_2d()
    202  stride_height, filterShape.dimensions[2], filterShape.dimensions[1], in depthwise_conv_2d()
    267  pad = getPadding(inputShape.dimensions[2], inputShape.dimensions[1], stride_width, in l2_pool_2d()
    351  pad = getPadding(inputShape.dimensions[2], inputShape.dimensions[1], stride_width, in max_pool_2d()
    534  pad = getPadding(inputShape.dimensions[2], inputShape.dimensions[1], stride_width, in average_pool_2d()
    574  const int32_t dims = model->getShape(ins[0]).dimensions.size(); in concatenation()
    [all …]
|
D | HexagonUtils.cpp |
    113  std::vector<uint32_t> dimensions(N - dims.size(), 1); in getAlignedDimensions() local
    114  dimensions.insert(dimensions.end(), dims.begin(), dims.end()); in getAlignedDimensions()
    115  return dimensions; in getAlignedDimensions()
    281  ", .dimensions: " + toString(shape.dimensions.data(), shape.dimensions.size()) + in toString()
|
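Lines 113-115 of HexagonUtils.cpp above show how getAlignedDimensions() pads a shape out to a fixed rank by prepending 1s; the HexagonModel.cpp entry always aligns to rank 4. A self-contained sketch of that behavior follows; the handling of shapes already longer than N is an assumption, since the quoted lines only show the padding path.

    #include <cstdint>
    #include <vector>

    // Produce a rank-N shape by prepending 1s to the original dimensions.
    // Shapes that already exceed rank N are returned unchanged here (assumed).
    std::vector<uint32_t> getAlignedDimensions(const std::vector<uint32_t>& dims, uint32_t N) {
        if (dims.size() > N) return dims;
        std::vector<uint32_t> dimensions(N - dims.size(), 1);          // leading 1s
        dimensions.insert(dimensions.end(), dims.begin(), dims.end()); // original extents
        return dimensions;
    }

    // Example: getAlignedDimensions({32, 8}, 4) -> {1, 1, 32, 8}.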
/hardware/interfaces/neuralnetworks/1.2/vts/functional/ |
D | Utils.cpp |
    78  if (isTensor(operand.type) && operand.dimensions.size() == 0) return 0; in sizeOfData()
    79  return std::accumulate(operand.dimensions.begin(), operand.dimensions.end(), dataSize, in sizeOfData()
|
D | BasicTests.cpp |
    162  .dimensions = {1}, in TEST_P()
    172  .dimensions = {1}, in TEST_P()
    182  .dimensions = {}, in TEST_P()
    192  .dimensions = {1}, in TEST_P()
    202  .dimensions = {1}, in TEST_P()
    212  .dimensions = {1}, in TEST_P()
|
D | ValidateModel.cpp |
    99  .dimensions = {}, in addOperand()
    220  size += sizeForBinder(operand.dimensions); in sizeForBinder()
    418  model->operands[operand].dimensions = std::vector<uint32_t>(invalidRank, 0); in mutateOperandRankTest()
    751  newOperand.dimensions = hidl_vec<uint32_t>(); in mutateOperand()
    758  newOperand.dimensions = in mutateOperand()
    759  operand->dimensions.size() > 0 ? operand->dimensions : hidl_vec<uint32_t>({1}); in mutateOperand()
    764  newOperand.dimensions = in mutateOperand()
    765  operand->dimensions.size() > 0 ? operand->dimensions : hidl_vec<uint32_t>({1}); in mutateOperand()
    772  newOperand.dimensions = in mutateOperand()
    773  operand->dimensions.size() > 0 ? operand->dimensions : hidl_vec<uint32_t>({1}); in mutateOperand()
    [all …]
|
D | GeneratedTestHarness.cpp |
    104  .dimensions = op.dimensions, in createModel()
    178  auto& dims = model->operands[i].dimensions; in makeOutputDimensionsUnspecified()
    325  const auto& expect = testModel.main.operands[testModel.main.outputIndexes[i]].dimensions; in EvaluatePreparedModel()
    326  const std::vector<uint32_t> actual = outputShapes[i].dimensions; in EvaluatePreparedModel()
|
/hardware/interfaces/neuralnetworks/1.2/ |
D | types.t |
    205  * For a scalar operand, dimensions.size() must be 0.
    207  * A tensor operand with all dimensions specified has "fully
    208  * specified" dimensions. Whenever possible (i.e., whenever the
    209  * dimensions are known at model construction time), a tensor
    211  * specified dimensions, in order to enable the best possible
    214  * If a tensor operand's dimensions are not fully specified, the
    215  * dimensions of the operand are deduced from the operand
    216  * dimensions and values of the operation for which that operand
    219  * In the following situations, a tensor operand's dimensions must
    226  * specified dimensions must either be present in the
    [all …]
|
D | types.hal |
    81  * The size of the scales array must be equal to dimensions[channelDim].
    84  * The channel dimension of this tensor must not be unknown (dimensions[channelDim] != 0).
    160  * dimensions. The output is the sum of both input tensors, optionally
    163  * Two dimensions are compatible when:
    168  * input operands. It starts with the trailing dimensions, and works its
    190  * * 1: A tensor of the same {@link OperandType}, and compatible dimensions
    208  * The output dimensions are functions of the filter dimensions, stride, and
    292  * dimensions except the dimension along the concatenation axis.
    328  * The output dimensions are functions of the filter dimensions, stride, and
    479  * The output dimensions are functions of the filter dimensions, stride, and
    [all …]
|
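Lines 81-84 of the 1.2 types.hal above constrain TENSOR_QUANT8_SYMM_PER_CHANNEL operands: the scales array must have exactly dimensions[channelDim] entries, and that channel dimension must not be unknown (non-zero). A validation sketch of just those two checks; the struct mirrors the HAL's SymmPerChannelQuantParams fields but should be treated as an illustrative stand-in.

    #include <cstdint>
    #include <vector>

    // Assumed mirror of V1_2::SymmPerChannelQuantParams, for illustration only.
    struct SymmPerChannelQuantParams {
        std::vector<float> scales;
        uint32_t channelDim;
    };

    // The channel axis must be in range and known (non-zero), and there must
    // be one scale per channel along that axis.
    bool perChannelParamsAreValid(const std::vector<uint32_t>& dimensions,
                                  const SymmPerChannelQuantParams& params) {
        if (params.channelDim >= dimensions.size()) return false;   // bad axis
        const uint32_t channels = dimensions[params.channelDim];
        if (channels == 0) return false;                            // unknown extent
        return params.scales.size() == channels;                    // one scale per channel
    }

    // Example: a filter of shape {8, 3, 3, 8} with channelDim = 0 needs
    // exactly 8 scales.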