Channel: Intel Developer Zone Articles

Intel® IPP Functions Optimized for Intel® Advanced Vector Extensions 2 (Intel® AVX2)


Here is a list of Intel® Integrated Performance Primitives (Intel® IPP) functions that are optimized for Intel® Advanced Vector Extensions 2 (Intel® AVX2) on the Haswell and Skylake microarchitectures. These functions include Convert, CrossCorr, Max/Min, PolarToCart, Sort, and other arithmetic functions. All functions listed here are hand-tuned for Intel® architecture; Intel IPP functions not listed here still benefit from optimizations applied by the Intel® Compiler.

Optimization Notice

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804

ippiConvert_16s16u_C1Rs
ippiConvert_16s32f_C1R
ippiConvert_16s32s_C1R
ippiConvert_16s8u_C1R
ippiConvert_16u32f_C1R
ippiConvert_16u32s_C1R
ippiConvert_16u8u_C1R
ippiConvert_16s8s_C1RSfs
ippiConvert_16u16s_C1RSfs
ippiConvert_16u8s_C1RSfs
ippiConvert_32f16s_C1RSfs
ippiConvert_32f16u_C1RSfs
ippiConvert_32f32s_C1RSfs
ippiConvert_32f8s_C1RSfs
ippiConvert_32f8u_C1RSfs
ippiCopy_16u_C1MR
ippiCopy_16u_C3MR
ippiCopy_32s_C1MR
ippiCopy_32s_C3MR
ippiCopy_32s_C4MR
ippiCopy_8u_C1MR
ippiCopy_8u_C1R
ippiCopy_8u_C3MR
ippiCopy_8u_C3P3R
ippiCopy_8u_C4P4R
ippiCopyConstBorder_16s_C3R
ippiCopyConstBorder_16s_C4R
ippiCopyConstBorder_16u_C1R
ippiCopyConstBorder_16u_C3R
ippiCopyConstBorder_16u_C4R
ippiCopyConstBorder_32f_C3R
ippiCopyConstBorder_32f_C4R
ippiCopyConstBorder_32s_C3R
ippiCopyConstBorder_32s_C4R
ippiCopyConstBorder_8u_C3R
ippiCopyConstBorder_8u_C4R
ippiCopyReplicateBorder_16s_C1IR
ippiCopyReplicateBorder_16s_C1R
ippiCopyReplicateBorder_16s_C3IR
ippiCopyReplicateBorder_16s_C3R
ippiCopyReplicateBorder_16s_C4IR
ippiCopyReplicateBorder_16s_C4R
ippiCopyReplicateBorder_16u_C1IR
ippiCopyReplicateBorder_16u_C1R
ippiCopyReplicateBorder_16u_C3IR
ippiCopyReplicateBorder_16u_C3R
ippiCopyReplicateBorder_16u_C4IR
ippiCopyReplicateBorder_16u_C4R
ippiCopyReplicateBorder_32f_C1IR
ippiCopyReplicateBorder_32f_C1R
ippiCopyReplicateBorder_32f_C3IR
ippiCopyReplicateBorder_32f_C3R
ippiCopyReplicateBorder_32f_C4IR
ippiCopyReplicateBorder_32f_C4R
ippiCopyReplicateBorder_32s_C1IR
ippiCopyReplicateBorder_32s_C1R
ippiCopyReplicateBorder_32s_C3IR
ippiCopyReplicateBorder_32s_C3R
ippiCopyReplicateBorder_32s_C4IR
ippiCopyReplicateBorder_32s_C4R
ippiCopyReplicateBorder_8u_C1IR
ippiCopyReplicateBorder_8u_C1R
ippiCopyReplicateBorder_8u_C3IR
ippiCopyReplicateBorder_8u_C3R
ippiCopyReplicateBorder_8u_C4IR
ippiCopyReplicateBorder_8u_C4R
ippiCopyMirrorBorder_16s_C1IR
ippiCopyMirrorBorder_16s_C1R
ippiCopyMirrorBorder_16s_C3IR
ippiCopyMirrorBorder_16s_C3R
ippiCopyMirrorBorder_16s_C4IR
ippiCopyMirrorBorder_16s_C4R
ippiCopyMirrorBorder_16u_C1IR
ippiCopyMirrorBorder_16u_C1R
ippiCopyMirrorBorder_16u_C3IR
ippiCopyMirrorBorder_16u_C3R
ippiCopyMirrorBorder_16u_C4IR
ippiCopyMirrorBorder_16u_C4R
ippiCopyMirrorBorder_32f_C1IR
ippiCopyMirrorBorder_32f_C1R
ippiCopyMirrorBorder_32f_C3IR
ippiCopyMirrorBorder_32f_C3R
ippiCopyMirrorBorder_32f_C4IR
ippiCopyMirrorBorder_32f_C4R
ippiCopyMirrorBorder_32s_C1IR
ippiCopyMirrorBorder_32s_C1R
ippiCopyMirrorBorder_32s_C3IR
ippiCopyMirrorBorder_32s_C3R
ippiCopyMirrorBorder_32s_C4IR
ippiCopyMirrorBorder_32s_C4R
ippiCopyMirrorBorder_8u_C1IR
ippiCopyMirrorBorder_8u_C1R
ippiCopyMirrorBorder_8u_C3IR
ippiCopyMirrorBorder_8u_C3R
ippiCopyMirrorBorder_8u_C4IR
ippiCopyMirrorBorder_8u_C4R
ippiCrossCorrNorm_32f_C1R
ippiCrossCorrNorm_16u32f_C1R
ippiCrossCorrNorm_8u32f_C1R
ippiCrossCorrNorm_8u_C1RSfs
ippiDilateBorder_32f_C1R
ippiDilateBorder_32f_C3R
ippiDilateBorder_32f_C4R
ippiDilateBorder_8u_C1R
ippiDilateBorder_8u_C3R
ippiDilateBorder_8u_C4R
ippiDistanceTransform_3x3_8u_C1R
ippiDistanceTransform_3x3_8u32f_C1R
ippiErodeBorder_32f_C1R
ippiErodeBorder_32f_C3R
ippiErodeBorder_32f_C4R
ippiErodeBorder_8u_C1R
ippiErodeBorder_8u_C3R
ippiErodeBorder_8u_C4R
ippiFilterBoxBorder_16s_C1R
ippiFilterBoxBorder_16s_C3R
ippiFilterBoxBorder_16s_C4R
ippiFilterBoxBorder_16u_C1R
ippiFilterBoxBorder_16u_C3R
ippiFilterBoxBorder_16u_C4R
ippiFilterBoxBorder_32f_C1R
ippiFilterBoxBorder_32f_C3R
ippiFilterBoxBorder_32f_C4R
ippiFilterBoxBorder_8u_C1R
ippiFilterBoxBorder_8u_C3R
ippiFilterBoxBorder_8u_C4R
ippiFilterLaplacianBorder_32f_C1R
ippiFilterLaplacianBorder_8u16s_C1R
ippiFilterMaxBorder_32f_C1R
ippiFilterMaxBorder_32f_C3R
ippiFilterMaxBorder_32f_C4R
ippiFilterMaxBorder_8u_C1R
ippiFilterMaxBorder_8u_C3R
ippiFilterMaxBorder_8u_C4R
ippiFilterMedianBorder_16s_C1R
ippiFilterMedianBorder_16u_C1R
ippiFilterMedianBorder_32f_C1R
ippiFilterMedianBorder_8u_C1R
ippiFilterMinBorder_32f_C1R
ippiFilterMinBorder_32f_C3R
ippiFilterMinBorder_32f_C4R
ippiFilterMinBorder_8u_C1R
ippiFilterMinBorder_8u_C3R
ippiFilterMinBorder_8u_C4R
ippiFilterScharrHorizMaskBorder_16s_C1R
ippiFilterScharrHorizMaskBorder_32f_C1R
ippiFilterScharrHorizMaskBorder_8u16s_C1R
ippiFilterScharrVertMaskBorder_16s_C1R
ippiFilterScharrVertMaskBorder_8u16s_C1R
ippiGetCentralMoment_64f
ippiGetNormalizedCentralMoment_64f
ippiGetSpatialMoment_64f
ippiHarrisCorner_32f_C1R
ippiHarrisCorner_8u32f_C1R
ippiHistogramEven_8u_C1R
ippiHoughLine_Region_8u32f_C1R
ippiLUTPalette_8u_C3R
ippiLUTPalette_8u_C4R
ippiMax_16s_C1R
ippiMax_16u_C1R
ippiMax_32f_C1R
ippiMax_8u_C1R
ippiMin_16s_C1R
ippiMin_16u_C1R
ippiMin_32f_C1R
ippiMin_8u_C1R
ippiMinEigenVal_32f_C1R
ippiMinEigenVal_8u32f_C1R
ippiMirror_16s_C1IR
ippiMirror_16s_C1R
ippiMirror_16s_C3IR
ippiMirror_16s_C3R
ippiMirror_16s_C4IR
ippiMirror_16s_C4R
ippiMirror_16u_C1IR
ippiMirror_16u_C1R
ippiMirror_16u_C3IR
ippiMirror_16u_C3R
ippiMirror_16u_C4IR
ippiMirror_16u_C4R
ippiMirror_32f_C1IR
ippiMirror_32f_C1R
ippiMirror_32f_C3IR
ippiMirror_32f_C3R
ippiMirror_32f_C4IR
ippiMirror_32f_C4R
ippiMirror_32s_C1IR
ippiMirror_32s_C1R
ippiMirror_32s_C3IR
ippiMirror_32s_C3R
ippiMirror_32s_C4IR
ippiMirror_32s_C4R
ippiMirror_8u_C1IR
ippiMirror_8u_C1R
ippiMirror_8u_C3IR
ippiMirror_8u_C3R
ippiMirror_8u_C4IR
ippiMirror_8u_C4R
ippiMoments64f_16u_C1R
ippiMoments64f_32f_C1R
ippiMoments64f_8u_C1R
ippiMul_16s_C1RSfs
ippiMul_16u_C1RSfs
ippiMul_32f_C1R
ippiMul_8u_C1RSfs
ippiMulC_16s_C1IRSfs
ippiMulC_32f_C1R
ippiSet_16s_C1MR
ippiSet_16s_C3MR
ippiSet_16s_C4MR
ippiSet_16u_C1MR
ippiSet_16u_C3MR
ippiSet_16u_C4MR
ippiSet_32f_C1MR
ippiSet_32f_C3MR
ippiSet_32f_C4MR
ippiSet_32s_C1MR
ippiSet_32s_C3MR
ippiSet_32s_C4MR
ippiSet_8u_C1MR
ippiSet_8u_C3MR
ippiSet_8u_C4MR
ippiSqr_32f_C1R
ippiSqrDistanceNorm_32f_C1R
ippiSqrDistanceNorm_8u32f_C1R
ippiSwapChannels_16u_C4R
ippiSwapChannels_32f_C4R
ippiSwapChannels_8u_C4R
ippiThreshold_GT_16s_C1R
ippiThreshold_GT_32f_C1R
ippiThreshold_GT_8u_C1R
ippiThreshold_GTVal_16s_C1R
ippiThreshold_GTVal_32f_C1R
ippiThreshold_GTVal_8u_C1R
ippiThreshold_LTVal_16s_C1R
ippiThreshold_LTVal_32f_C1R
ippiThreshold_LTVal_8u_C1R
ippiTranspose_16s_C1IR
ippiTranspose_16s_C1R
ippiTranspose_16s_C3IR
ippiTranspose_16s_C3R
ippiTranspose_16s_C4IR
ippiTranspose_16s_C4R
ippiTranspose_16u_C1IR
ippiTranspose_16u_C1R
ippiTranspose_16u_C3IR
ippiTranspose_16u_C3R
ippiTranspose_16u_C4IR
ippiTranspose_16u_C4R
ippiTranspose_32f_C1IR
ippiTranspose_32f_C1R
ippiTranspose_32f_C3IR
ippiTranspose_32f_C3R
ippiTranspose_32f_C4IR
ippiTranspose_32f_C4R
ippiTranspose_32s_C1IR
ippiTranspose_32s_C1R
ippiTranspose_32s_C3IR
ippiTranspose_32s_C3R
ippiTranspose_32s_C4IR
ippiTranspose_32s_C4R
ippiTranspose_8u_C1IR
ippiTranspose_8u_C1R
ippiTranspose_8u_C3IR
ippiTranspose_8u_C3R
ippiTranspose_8u_C4IR
ippiTranspose_8u_C4R
ippsDotProd_32f64f
ippsDotProd_64f
ippsFlip_16u_I
ippsFlip_32f_I
ippsFlip_64f_I
ippsFlip_8u_I
ippsMagnitude_32f
ippsMagnitude_64f
ippsMaxEvery_16u
ippsMaxEvery_32f
ippsMaxEvery_64f
ippsMaxEvery_8u
ippsMinEvery_16u
ippsMinEvery_32f
ippsMinEvery_64f
ippsMinEvery_8u
ippsPolarToCart_32f
ippsPolarToCart_64f
ippsSortAscend_16s_I
ippsSortAscend_16u_I
ippsSortAscend_32f_I
ippsSortAscend_32s_I
ippsSortAscend_64f_I
ippsSortAscend_8u_I
ippsSortDescend_16s_I
ippsSortDescend_16u_I
ippsSortDescend_32f_I
ippsSortDescend_32s_I
ippsSortDescend_64f_I
ippsSortDescend_8u_I
ippsSortIndexAscend_16s_I
ippsSortIndexAscend_16u_I
ippsSortIndexAscend_32f_I
ippsSortIndexAscend_32s_I
ippsSortIndexAscend_64f_I
ippsSortIndexAscend_8u_I
ippsSortIndexDescend_16s_I
ippsSortIndexDescend_16u_I
ippsSortIndexDescend_32f_I
ippsSortIndexDescend_32s_I
ippsSortIndexDescend_64f_I
ippsSortIndexDescend_8u_I
 
ippiAdd_8u_C1RSfs
ippiAdd_16u_C1RSfs
ippiAdd_16s_C1RSfs
ippiAdd_32f_C1R
ippiSub_8u_C1RSfs
ippiSub_16u_C1RSfs
ippiSub_16s_C1RSfs
ippiSub_32f_C1R
ippiMaxEvery_8u_C1R
ippiMaxEvery_16u_C1R
ippiMaxEvery_32f_C1R
ippiMinEvery_8u_C1R
ippiMinEvery_16u_C1R
ippiMinEvery_32f_C1R
ippiAnd_8u_C1R
ippiOr_8u_C1R
ippiXor_8u_C1R
ippiNot_8u_C1R
ippiCompare_8u_C1R
ippiCompare_16u_C1R
ippiCompare_16s_C1R
ippiCompare_32f_C1R
ippiSum_8u_C1R 
ippiSum_8u_C3R 
ippiSum_8u_C4R 
ippiSum_16u_C1R
ippiSum_16u_C3R
ippiSum_16u_C4R
ippiSum_16s_C1R
ippiSum_16s_C3R
ippiSum_16s_C4R
ippiSum_32f_C1R
ippiSum_32f_C3R
ippiSum_32f_C4R
ippiMean_8u_C1R 
ippiMean_8u_C3R 
ippiMean_8u_C4R 
ippiMean_16u_C1R
ippiMean_16u_C3R
ippiMean_16u_C4R
ippiMean_16s_C1R
ippiMean_16s_C3R
ippiMean_16s_C4R
ippiMean_32f_C1R
ippiMean_32f_C3R
ippiMean_32f_C4R
ippiNorm_Inf_8u_C1R
ippiNorm_Inf_8u_C3R 
ippiNorm_Inf_8u_C4R 
ippiNorm_Inf_16u_C1R
ippiNorm_Inf_16u_C3R
ippiNorm_Inf_16u_C4R
ippiNorm_Inf_16s_C1R
ippiNorm_Inf_16s_C3R
ippiNorm_Inf_16s_C4R
ippiNorm_Inf_32f_C1R
ippiNorm_Inf_32f_C3R
ippiNorm_Inf_32f_C4R
ippiNorm_L1_8u_C1R
ippiNorm_L1_8u_C3R 
ippiNorm_L1_8u_C4R 
ippiNorm_L1_16u_C1R
ippiNorm_L1_16u_C3R
ippiNorm_L1_16u_C4R
ippiNorm_L1_16s_C1R
ippiNorm_L1_16s_C3R
ippiNorm_L1_16s_C4R
ippiNorm_L1_32f_C1R
ippiNorm_L1_32f_C3R
ippiNorm_L1_32f_C4R
ippiNorm_L2_8u_C1R
ippiNorm_L2_8u_C3R 
ippiNorm_L2_8u_C4R 
ippiNorm_L2_16u_C1R
ippiNorm_L2_16u_C3R
ippiNorm_L2_16u_C4R
ippiNorm_L2_16s_C1R
ippiNorm_L2_16s_C3R
ippiNorm_L2_16s_C4R
ippiNorm_L2_32f_C1R
ippiNorm_L2_32f_C3R
ippiNorm_L2_32f_C4R
ippiNormRel_Inf_8u_C1R
ippiNormRel_Inf_16u_C1R
ippiNormRel_Inf_16s_C1R
ippiNormRel_Inf_32f_C1R
ippiNormRel_L1_8u_C1R
ippiNormRel_L1_16u_C1R
ippiNormRel_L1_16s_C1R
ippiNormRel_L1_32f_C1R
ippiNormRel_L2_8u_C1R
ippiNormRel_L2_16u_C1R
ippiNormRel_L2_16s_C1R
ippiNormRel_L2_32f_C1R
ippiNormDiff_Inf_8u_C1R
ippiNormDiff_Inf_8u_C3R 
ippiNormDiff_Inf_8u_C4R 
ippiNormDiff_Inf_16u_C1R
ippiNormDiff_Inf_16u_C3R
ippiNormDiff_Inf_16u_C4R
ippiNormDiff_Inf_16s_C1R
ippiNormDiff_Inf_16s_C3R
ippiNormDiff_Inf_16s_C4R
ippiNormDiff_Inf_32f_C1R
ippiNormDiff_Inf_32f_C3R
ippiNormDiff_Inf_32f_C4R
ippiNormDiff_L1_8u_C1R
ippiNormDiff_L1_8u_C3R 
ippiNormDiff_L1_8u_C4R 
ippiNormDiff_L1_16u_C1R
ippiNormDiff_L1_16u_C3R
ippiNormDiff_L1_16u_C4R
ippiNormDiff_L1_16s_C1R
ippiNormDiff_L1_16s_C3R
ippiNormDiff_L1_16s_C4R
ippiNormDiff_L1_32f_C1R
ippiNormDiff_L1_32f_C3R
ippiNormDiff_L1_32f_C4R
ippiNormDiff_L2_8u_C1R
ippiNormDiff_L2_8u_C3R 
ippiNormDiff_L2_8u_C4R 
ippiNormDiff_L2_16u_C1R
ippiNormDiff_L2_16u_C3R
ippiNormDiff_L2_16u_C4R
ippiNormDiff_L2_16s_C1R
ippiNormDiff_L2_16s_C3R
ippiNormDiff_L2_16s_C4R
ippiNormDiff_L2_32f_C1R
ippiNormDiff_L2_32f_C3R
ippiNormDiff_L2_32f_C4R
ippiSwapChannels_8u_C3C4R
ippiSwapChannels_16u_C3C4R
ippiSwapChannels_32f_C3C4R
ippiSwapChannels_8u_C4C3R
ippiSwapChannels_16u_C4C3R
ippiSwapChannels_32f_C4C3R
ippiSwapChannels_8u_C3R
ippiSwapChannels_16u_C3R
ippiSwapChannels_32f_C3R
ippiSwapChannels_8u_AC4R
ippiSwapChannels_16u_AC4R
ippiSwapChannels_32f_AC4R
ippiCopy_8u_AC4C3R
ippiCopy_16u_AC4C3R
ippiCopy_32f_AC4C3R
ippiCopy_8u_P3C3R
ippiCopy_16u_P3C3R
ippiCopy_32f_P3C3R
ippiMulC_32f_C1IR
ippiSet_8u_C1R
ippiSet_16u_C1R
ippiSet_32f_C1R
ippiSet_8u_C3R
ippiSet_16u_C3R
ippiSet_32f_C3R
ippiSet_8u_C4R
ippiSet_16u_C4R
ippiWarpAffineBack_8u_C1R 
ippiWarpAffineBack_8u_C3R 
ippiWarpAffineBack_8u_C4R 
ippiWarpAffineBack_16u_C1R
ippiWarpAffineBack_16u_C3R
ippiWarpAffineBack_16u_C4R
ippiWarpAffineBack_32f_C1R
ippiWarpAffineBack_32f_C3R
ippiWarpAffineBack_32f_C4R
ippiWarpPerspectiveBack_8u_C1R 
ippiWarpPerspectiveBack_8u_C3R 
ippiWarpPerspectiveBack_8u_C4R 
ippiWarpPerspectiveBack_16u_C1R
ippiWarpPerspectiveBack_16u_C3R
ippiWarpPerspectiveBack_16u_C4R
ippiWarpPerspectiveBack_32f_C1R
ippiWarpPerspectiveBack_32f_C3R
ippiWarpPerspectiveBack_32f_C4R
ippiCopySubpixIntersect_8u_C1R
ippiCopySubpixIntersect_8u32f_C1R
ippiCopySubpixIntersect_32f_C1R
ippiSqrIntegral_8u32f64f_C1R
ippiIntegral_8u32f_C1R
ippiSqrIntegral_8u32s64f_C1R
ippiIntegral_8u32s_C1R
ippiHaarClassifierFree_32f
ippiHaarClassifierInitAlloc_32f
ippiRectStdDev_32f_C1R
ippiApplyHaarClassifier_32f_C1R
ippiAbsDiff_8u_C1R
ippiAbsDiff_16u_C1R
ippiAbsDiff_32f_C1R
ippiMean_8u_C1MR 
ippiMean_16u_C1MR
ippiMean_32f_C1MR
ippiMean_8u_C3CMR 
ippiMean_16u_C3CMR
ippiMean_32f_C3CMR
ippiMean_StdDev_8u_C1MR 
ippiMean_StdDev_16u_C1MR
ippiMean_StdDev_32f_C1MR
ippiMean_StdDev_8u_C3CMR 
ippiMean_StdDev_16u_C3CMR
ippiMean_StdDev_32f_C3CMR
ippiMean_StdDev_8u_C1R 
ippiMean_StdDev_16u_C1R
ippiMean_StdDev_32f_C1R
ippiMean_StdDev_8u_C3CR 
ippiMean_StdDev_16u_C3CR
ippiMean_StdDev_32f_C3CR
ippiMinMaxIndx_8u_C1MR 
ippiMinMaxIndx_16u_C1MR
ippiMinMaxIndx_32f_C1MR
ippiMinMaxIndx_8u_C1R 
ippiMinMaxIndx_16u_C1R
ippiMinMaxIndx_32f_C1R
ippiNorm_Inf_8u_C1MR
ippiNorm_Inf_8s_C1MR 
ippiNorm_Inf_16u_C1MR
ippiNorm_Inf_32f_C1MR
ippiNorm_L1_8u_C1MR
ippiNorm_L1_8s_C1MR 
ippiNorm_L1_16u_C1MR
ippiNorm_L1_32f_C1MR
ippiNorm_L2_8u_C1MR
ippiNorm_L2_8s_C1MR 
ippiNorm_L2_16u_C1MR
ippiNorm_L2_32f_C1MR
ippiNorm_Inf_8u_C3CMR
ippiNorm_Inf_8s_C3CMR 
ippiNorm_Inf_16u_C3CMR
ippiNorm_Inf_32f_C3CMR
ippiNorm_L1_8u_C3CMR
ippiNorm_L1_8s_C3CMR 
ippiNorm_L1_16u_C3CMR
ippiNorm_L1_32f_C3CMR
ippiNorm_L2_8u_C3CMR
ippiNorm_L2_8s_C3CMR 
ippiNorm_L2_16u_C3CMR
ippiNorm_L2_32f_C3CMR
ippiNormRel_Inf_8u_C1MR
ippiNormRel_Inf_8s_C1MR 
ippiNormRel_Inf_16u_C1MR
ippiNormRel_Inf_32f_C1MR
ippiNormRel_L1_8u_C1MR
ippiNormRel_L1_8s_C1MR 
ippiNormRel_L1_16u_C1MR
ippiNormRel_L1_32f_C1MR
ippiNormRel_L2_8u_C1MR
ippiNormRel_L2_8s_C1MR 
ippiNormRel_L2_16u_C1MR
ippiNormRel_L2_32f_C1MR
ippiNormDiff_Inf_8u_C1MR
ippiNormDiff_Inf_8s_C1MR 
ippiNormDiff_Inf_16u_C1MR
ippiNormDiff_Inf_32f_C1MR
ippiNormDiff_L1_8u_C1MR
ippiNormDiff_L1_8s_C1MR 
ippiNormDiff_L1_16u_C1MR
ippiNormDiff_L1_32f_C1MR
ippiNormDiff_L2_8u_C1MR
ippiNormDiff_L2_8s_C1MR 
ippiNormDiff_L2_16u_C1MR
ippiNormDiff_L2_32f_C1MR
ippiNormDiff_Inf_8u_C3CMR
ippiNormDiff_Inf_8s_C3CMR 
ippiNormDiff_Inf_16u_C3CMR
ippiNormDiff_Inf_32f_C3CMR
ippiNormDiff_L1_8u_C3CMR
ippiNormDiff_L1_8s_C3CMR 
ippiNormDiff_L1_16u_C3CMR
ippiNormDiff_L1_32f_C3CMR
ippiNormDiff_L2_8u_C3CMR 
ippiNormDiff_L2_8s_C3CMR 
ippiNormDiff_L2_16u_C3CMR
ippiNormDiff_L2_32f_C3CMR
ippiFilterRowBorderPipelineGetBufferSize_32f_C1R
ippiFilterRowBorderPipelineGetBufferSize_32f_C3R
ippiFilterRowBorderPipeline_32f_C1R
ippiFilterRowBorderPipeline_32f_C3R
ippiDistanceTransform_5x5_8u32f_C1R
ippiTrueDistanceTransform_8u32f_C1R
ippiTrueDistanceTransformGetBufferSize_8u32f_C1R
ippiFilterScharrVertGetBufferSize_32f_C1R
ippiFilterScharrVertMaskBorderGetBufferSize
ippiFilterScharrVertBorder_32f_C1R
ippiFilterScharrVertMaskBorder_32f_C1R
ippiFilterScharrHorizGetBufferSize_32f_C1R
ippiFilterScharrHorizMaskBorderGetBufferSize
ippiFilterScharrHorizBorder_32f_C1R
ippiFilterSobelNegVertGetBufferSize_8u16s_C1R
ippiFilterSobelNegVertBorder_8u16s_C1R
ippiFilterSobelHorizBorder_8u16s_C1R
ippiFilterSobelVertSecondGetBufferSize_8u16s_C1R
ippiFilterSobelVertSecondBorder_8u16s_C1R
ippiFilterSobelHorizSecondGetBufferSize_8u16s_C1R
ippiFilterSobelHorizSecondBorder_8u16s_C1R
ippiFilterSobelNegVertGetBufferSize_32f_C1R
ippiFilterSobelNegVertBorder_32f_C1R
ippiFilterSobelHorizGetBufferSize_32f_C1R
ippiFilterSobelHorizBorder_32f_C1R
ippiFilterSobelVertSecondGetBufferSize_32f_C1R
ippiFilterSobelVertSecondBorder_32f_C1R
ippiFilterSobelHorizSecondGetBufferSize_32f_C1R
ippiFilterSobelHorizSecondBorder_32f_C1R
ippiColorToGray_8u_C3C1R
ippiColorToGray_16u_C3C1R
ippiColorToGray_32f_C3C1R
ippiColorToGray_8u_AC4C1R
ippiColorToGray_16u_AC4C1R
ippiColorToGray_32f_AC4C1R
ippiRGBToGray_8u_C3C1R
ippiRGBToGray_16u_C3C1R
ippiRGBToGray_32f_C3C1R
ippiRGBToGray_8u_AC4C1R
ippiRGBToGray_16u_AC4C1R
ippiRGBToGray_32f_AC4C1R
ippiRGBToXYZ_8u_C3R
ippiRGBToXYZ_16u_C3R
ippiRGBToXYZ_32f_C3R
ippiXYZToRGB_8u_C3R
ippiXYZToRGB_16u_C3R
ippiXYZToRGB_32f_C3R
ippiRGBToHSV_8u_C3R
ippiRGBToHSV_16u_C3R
ippiHSVToRGB_8u_C3R
ippiHSVToRGB_16u_C3R
ippiRGBToHLS_8u_C3R
ippiRGBToHLS_16u_C3R
ippiRGBToHLS_32f_C3R
ippiHLSToRGB_8u_C3R
ippiHLSToRGB_16u_C3R
ippiHLSToRGB_32f_C3R
ippiDotProd_8u64f_C1R
ippiDotProd_16u64f_C1R
ippiDotProd_16s64f_C1R
ippiDotProd_32u64f_C1R
ippiDotProd_32s64f_C1R
ippiDotProd_32f64f_C1R
ippiDotProd_8u64f_C3R
ippiDotProd_16u64f_C3R
ippiDotProd_16s64f_C3R
ippiDotProd_32u64f_C3R
ippiDotProd_32s64f_C3R
ippiDotProd_32f64f_C3R
ippiDotProd_8u64f_C4R
ippiDotProd_16u64f_C4R
ippiDotProd_16s64f_C4R
ippiDotProd_32u64f_C4R
ippiDotProd_32s64f_C4R
ippiDotProd_32f64f_C4R
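The names above follow the documented Intel IPP naming convention: a domain prefix (ippi for image processing, ipps for signal processing), an operation name, one or two data-type codes (8u, 16s, 32f, and so on, with source and destination types concatenated), and an optional descriptor (C1/C3/C4 for channel counts, R for region of interest, I for in-place, M for masked, Sfs for saturation with fixed scaling). As a rough sketch, assuming a simplified grammar that covers the names in this list but not every IPP name, a function name can be split like this:

```python
import re

# Simplified pattern: prefix, lazy operation name, one or more
# data-type codes (digits + u/s/f), and an optional descriptor.
_NAME_RE = re.compile(r"^(ippi|ipps)(.+?)_((?:\d+[usf])+)(?:_([A-Za-z0-9]+))?$")

def parse_ipp_name(name):
    """Split an Intel IPP function name into its conventional parts.

    Illustrative sketch only: real IPP names have more descriptor
    codes and variants than this simplified grammar handles.
    Returns (prefix, operation, [type codes], descriptor or None).
    """
    m = _NAME_RE.match(name)
    if not m:
        raise ValueError(f"not a recognized IPP-style name: {name}")
    prefix, op, types, descriptor = m.groups()
    return prefix, op, re.findall(r"\d+[usf]", types), descriptor
```

For example, parse_ipp_name("ippiConvert_32f8u_C1RSfs") identifies the Convert operation from a 32-bit float source to an 8-bit unsigned destination, on a one-channel region of interest with fixed scaling.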

Intel® Collaboration Suite for WebRTC Simplifies Adding Real-Time Communication to Your Applications



Overview

Web-based real-time communication (WebRTC) is an open standard, proposed by both the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF), that allows browser-to-browser applications to support voice calling, video chat, and peer-to-peer (P2P) data transmission. End users can communicate in real time directly from their browsers, without any additional clients or plugins.

The WebRTC standard is gaining significant momentum and is currently fully supported by browsers such as Google Chrome*, Mozilla Firefox*, and Opera*. Microsoft has also announced Object RTC (ORTC) support in its Edge* browser, which will be interoperable with WebRTC.

To ease adoption of WebRTC technology and make it widely available for expanding or creating new applications, Intel has developed an end-to-end WebRTC solution, Intel® Collaboration Suite for WebRTC (Intel® CS for WebRTC). Intel CS for WebRTC is highly optimized for Intel® platforms, including Intel® Xeon® processor-based products such as the Intel® Visual Compute Accelerator card, Intel® Core™ processor-based desktop products, and Intel® Atom™ processor-based mobile products.

You can download Intel CS for WebRTC from http://webrtc.intel.com at no charge. It includes the following main components:

  • Intel CS for WebRTC Conference Server – enables not only P2P-style communication, but also efficient WebRTC-based video conferencing.
  • Intel CS for WebRTC Gateway Server for SIP – provides WebRTC connectivity into session initiation protocol (SIP) conferences.
  • Intel CS for WebRTC Client SDK – allows you to develop WebRTC apps using JavaScript* APIs, an Internet Explorer* plugin for WebRTC, Android* native apps using Java* APIs, iOS* native apps using Objective-C* APIs, or Windows* native apps using C++ APIs.
  • Intel CS for WebRTC User Documentation – includes complete online documentation available on the WebRTC website http://webrtc.intel.com, with sample code, installation instructions, and API descriptions.

Problems with Existing WebRTC-Based RTC Solutions

WebRTC-based RTC solutions change the way people communicate, bringing real-time communication to the browser. However, as a new technology, WebRTC-based solutions require improvements in the following areas to be as complete as traditional RTC solutions.

  • Mostly P2P-based communication. The WebRTC standard itself, as well as Google's open source reference implementation, focuses only on peer-to-peer (P2P) communication, limiting most WebRTC-based solutions to two-party communication. Although some WebRTC solutions support multi-party chat, they use a mesh network topology, which is inefficient and supports only a few attendees on common client devices.
  • Not fully accounting for client usage preferences. Although browsers are available on multiple platforms, not all users prefer them: many mobile end users prefer native apps, such as Android or iOS apps. Additionally, some commonly used browsers, such as Internet Explorer, still do not natively support WebRTC.
  • Lack of flexibility on the MCU server. Some WebRTC-based solutions support multipoint control unit (MCU) servers for multi-party communication. However, most of those MCU servers use a router/forward model, which simply forwards the publishers' streams to the subscribers. This works when clients have equivalent capabilities or when SVC/simulcast is supported, but those are demanding requirements for many client devices. To work with a wide variety of devices, MCU servers must perform media-specific processing, such as transcoding and mixing.
  • Limited deployment mode choices for customers. Most existing WebRTC-based RTC solutions are offered as a service hosted by service providers. This model brings all the benefits of a cloud service, but does not serve customers who want to host the service themselves for data-sensitivity reasons.

Key Differentiators of Intel® CS for WebRTC

Fully Functional WebRTC-Based Audio/Video Communication

Intel CS for WebRTC not only offers peer-to-peer WebRTC communication, but also supports WebRTC-based multi-party video conferencing and provides WebRTC client connectivity to traditional video conferencing systems, such as SIP. For video conferencing, it provides router and mixer solutions simultaneously to handle complex customer scenarios. Additionally, it supports:

  • H.264 and VP8 video codecs for input and output streams
  • MCU multi-streaming
  • Real-time streaming protocol (RTSP) stream input
  • Customized video layout definition plus runtime control
  • Voice activity detection (VAD) controlled video switching
  • Flexible media recording

Easy to Deploy, Scale, and Integrate

Intel CS for WebRTC Conference and Gateway Servers provide pluggable integration modules as well as open APIs to work with existing enterprise systems. They scale easily to cluster mode, serving a larger number of users as cluster nodes are added. In addition, the Intel solution provides comprehensive client SDKs, including a JavaScript SDK, Android native SDK, iOS native SDK, and Windows native SDK, to help customers quickly extend their client applications with video communication capabilities.

High-Performance Media Processing Capability

Intel CS for WebRTC MCU and Gateway servers are built on top of Intel® Media Server Studio, optimized for Intel® Core™ processors and Intel® Xeon® processor E3 family with Intel® Iris™ graphics, Intel® Iris™ Pro graphics, and Intel® HD graphics technology.

The client SDKs, including the Android native SDK and Windows C++ SDK, use the hardware media processing capabilities of mobile and desktop platforms to improve the user experience. For example, the Android native SDK is optimized for Intel® Atom™ platforms (the Intel® Atom™ x3, x5, and x7 processor series), focusing on video power and performance as well as end-to-end latency. The Windows C++ SDK likewise uses the media processing acceleration of Intel® Core™ processor-based platforms (i3, i5, i7) for consistent HD video communication.

Secure, Intelligent, Reliable QoS Control Support

The Intel CS for WebRTC solution secures video communication data through HTTPS, secure WebSocket, and SRTP/DTLS. Intelligent quality of service (QoS) control, e.g., NACK, FEC, and dynamic bitrate control, maintains communication quality between clients and servers despite high packet loss and variable network bandwidth. Experiments summarized in Figure 1 show that the Intel video engine handles up to 20% packet loss and 200 ms delay.

Figure 1. Packet Loss Protection Results with QoS Control

Fully Functional Video Communication with Intel CS for WebRTC Conference Servers

Flexible Communication Modes

Intel CS for WebRTC offers both peer-to-peer video call and MCU-based multi-party video conference communication modes.

A typical WebRTC usage scenario is a direct peer-to-peer video call. After connecting to the signaling server, users can invite other parties into P2P video communication. All video, audio, and data streams are transported directly between peers, while the signaling messages for discovery and control go through the signaling server. As Figure 2 shows, Intel provides a reference signaling server implementation, called Peer Server, with source code included. Customers can build their own signaling server based on this Peer Server, or replace it entirely with an existing channel. The client SDK also provides a customization mechanism that lets users implement their own signaling channel adapter.
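Conceptually, the signaling server only relays discovery and control messages between named peers; media flows directly between the peers themselves. The toy in-memory relay below illustrates that role. It is purely illustrative and is not the Peer Server implementation (which is a Node.js-based reference server); the class and method names are hypothetical.

```python
from collections import defaultdict, deque

class ToySignalingRelay:
    """In-memory signaling message relay between named peers.

    Illustrates the Peer Server's role only: it forwards signaling
    messages (offers, answers, ICE candidates) between users and
    never touches the media streams themselves.
    """
    def __init__(self):
        # One FIFO inbox per recipient.
        self.inbox = defaultdict(deque)

    def send(self, sender, recipient, message):
        """Queue a signaling message for the recipient to pick up."""
        self.inbox[recipient].append((sender, message))

    def poll(self, user):
        """Return the next (sender, message) pair, or None if empty."""
        return self.inbox[user].popleft() if self.inbox[user] else None
```

Once the relayed offer/answer exchange completes, the peers open their media and data channels directly to each other, and the relay drops out of the data path.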

Figure 2. P2P Video Communication with Peer Server

The Intel CS for WebRTC solution further offers MCU-based multi-party video conference chat. As Figure 3 shows, all streams go through the MCU server, just as the signaling messages do. Compared with a mesh network solution, this reduces the stream traffic and computing overhead on client devices.

Figure 3. Multi-party Video Conference Chat through MCU Server

Unlike most existing WebRTC MCUs, which usually work as routers that forward media streams between clients, the Intel CS for WebRTC MCU server also handles media processing, allowing a wide range of devices to join the conference. Users can subscribe to either the forward streams or the mixed streams from the MCU server. Based on Intel Iris Pro graphics or Intel HD graphics technology, media processing on the MCU server achieves an excellent cost-performance ratio.

The Intel MCU provides additional flexibility for mixed streams: you can generate mixed streams at multiple video resolutions to adapt to client devices with different media processing capabilities and network bandwidth.

External Input for RTSP Streams

Intel CS for WebRTC can bridge a wider range of devices into the conference by supporting external input from RTSP streams. This means almost any RTSP-compatible device, including IP cameras, can join the video conference. IP camera support opens up usage scenarios and applications in security, remote education, remote healthcare, and more.

Mixed-Stream Layout Definition and Runtime Region Control

Through the Intel CS for WebRTC video layout definition interface, an expanded version of RFC 5707 (MSML), you can define any rectangle-style video layout for a conference according to the number of participants at runtime. Figure 4 shows the video layouts for one conference: the meeting defines 5 different layouts, for 1, 2, 3, 4, and 5-6 participants.

Figure 4. Example Video Layouts

Figure 5 details the layout regions for a maximum of 2 participants. The region with id 1 is always the primary region of the layout.

Figure 5. Example Video Layout Definition and Effect

The Intel CS for WebRTC MCU also supports automatic voice-activated video switching through voice activity detection (VAD): the participant most active on voice is switched into the primary region, shown in yellow in Figure 6.

Figure 6. Example Video Layouts with Primary Region

You can also assign any stream to any region at runtime, for flexible video layout design of the conference.
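To make the region idea concrete, the sketch below computes a simple grid of relative rectangle regions for n participants, with region 1 as the primary region. This is an illustrative calculation only: the actual Intel CS for WebRTC layouts are declared through the MSML-based definition interface, not computed this way, and the field names here are hypothetical.

```python
import math

def grid_regions(n):
    """Compute relative rectangle regions for n participants.

    Illustrative sketch of rectangle-style layout regions; the real
    product declares layouts via an MSML-based definition. Region ids
    start at 1, and region 1 (top-left) plays the primary role.
    Coordinates and sizes are fractions of the mixed frame.
    """
    cols = math.ceil(math.sqrt(n))       # squarest grid that fits n
    rows = math.ceil(n / cols)
    w, h = 1.0 / cols, 1.0 / rows
    regions = []
    for i in range(n):
        r, c = divmod(i, cols)
        regions.append({"id": i + 1, "left": c * w, "top": r * h,
                        "relativesize": w})
    return regions
```

A runtime region-assignment API can then map any published stream onto any of these region ids, which matches the switching behavior described above.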

Flexible Conference Recording

When recording in Intel CS for WebRTC, you can select any video feed and any audio feed. You can not only switch recording across the different streams the conference room offers (such as mixed and forward streams), but also select video and audio tracks separately from different streams; for example, take the audio track from the participants' mixed stream and the video track from the screen-sharing stream.

Scaling the Peer Server Reference Implementation

Although the Peer Server that Intel provides is a signaling server reference implementation for a single node, you can extend it into a distributed, large-scale platform by refactoring the implementation. See Figure 7 for a scaling proposal.

Figure 7. Peer Server Cluster Scaling Proposal

Scaling the MCU Conference Server

The Intel CS for WebRTC MCU server was designed as a distributed framework with separate components, including a manager node, signaling nodes, accessing nodes, and media processing nodes. These components are easy to scale and well suited for cloud deployment.

Figure 8 shows an example from the conference server user guide for deploying an MCU server cluster.

Figure 8. MCU Conference Server Cluster Deployment Example

Interoperability with Intel CS for WebRTC Gateway

To let legacy video conference solutions adopt WebRTC's advantages on the client side, Intel CS for WebRTC provides a WebRTC gateway.

Key Functionality Offering

The Intel CS for WebRTC gateway for SIP not only provides basic signaling and protocol translation between WebRTC and SIP, it also provides real-time media transcoding between VP8 and H.264 to bridge the differing codec preferences of the two sides. In addition, the gateway maintains session mappings between WebRTC and SIP to support bi-directional video calls. Figure 9 shows how SIP devices can connect with WebRTC terminals through the gateway Intel provides.

Figure 9. Connect WebRTC with SIP Terminals through the Gateway

Validated SIP Environments

Note: See the Intel CS for WebRTC Release Notes for the currently validated environments.

Cloud Deployment

Intel CS for WebRTC gateway instances are generally session-based. Each session is independent, so sessions scale easily across multiple instances for cloud deployment. You can make gateway instance management a component of your existing conference system's scheduling policy and achieve load balancing for the gateway.
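Because each session is independent, a scheduler can place a new session on whichever gateway instance is least loaded. The sketch below shows one such policy; it is a hypothetical illustration of the scheduling idea, and the instance fields ("sessions", "capacity") are assumptions, not the product's API.

```python
def pick_gateway(instances):
    """Pick the least-loaded gateway instance for a new session.

    Illustrative scheduling policy for session-based gateway scaling.
    Each instance dict carries hypothetical 'sessions' (current load)
    and 'capacity' (maximum sessions) fields.
    """
    candidates = [i for i in instances if i["sessions"] < i["capacity"]]
    if not candidates:
        raise RuntimeError("no gateway capacity available")
    # Lowest utilization ratio wins, which spreads load evenly even
    # across instances of different sizes.
    return min(candidates, key=lambda i: i["sessions"] / i["capacity"])
```

In practice this policy would live inside the existing conference system's scheduler, which already tracks instance health and load.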

Comprehensive Intel CS for WebRTC Client SDKs

Intel CS for WebRTC also provides comprehensive client SDKs to help you easily use all the functionality the server provides. The client SDKs allow client apps to communicate with remote clients or join conference meetings. Basic features include audio/video communication, data transmission, and screen sharing. P2P mode also supports a customized signaling channel that can be easily integrated into existing IT infrastructure.

The client SDKs include a JavaScript SDK, Android SDK, iOS SDK, and Windows SDK. Current features are listed in Table 1.

Table 1. Client SDK Features

#Partial support: H.264 video codec support in the JavaScript SDK is only available when the browser's WebRTC engine supports it.

Customized Signaling Channel

In addition to the default Peer Server, the Intel CS for WebRTC client SDK for P2P chat provides simple, customizable interfaces that allow you to implement and integrate your own signaling channel, such as through an extensible messaging and presence protocol (XMPP) server channel. Figure 10 shows the separate signaling channel module in the client SDK for P2P chat that users can customize.

Figure 10. Customized Signaling Channel in Client SDK for P2P Chat

Hardware Media Processing Acceleration

On Android platforms, VP8/H.264 decoding/encoding hardware acceleration is enabled if the underlying platform includes corresponding MediaCodec plugins. For Windows, H.264 decoding/encoding and VP8 decoding hardware acceleration is enabled with DXVA-based HMFT or Intel Media SDK. For iOS, H.264 encoding/decoding is hardware-accelerated through Video Toolbox framework. Table 2 below shows hardware acceleration for WebRTC on different platforms.

Table 2. Hardware Media Acceleration Status for Client SDKs

#Conditional support: only enabled if the platform level enables VP8 hardware codec

NAT Traversal

Interactive Connectivity Establishment (ICE) helps devices connect to each other in various complicated Network Address Translation (NAT) conditions. The client SDKs support Session Traversal Utilities for NAT (STUN) and Traversal Using Relay NAT (TURN) servers. Figure 11 and Figure 12 show how client SDKs perform NAT traversal through STUN or TURN servers.

Figure 11. NAT Traversal with STUN Server

Figure 12. NAT Traversal with TURN Server

Fine-Grained Media & Network Parameter Control

Client SDKs further allow you to choose the video or audio source and its resolution and frame rate, the preferred video codec, and maximum bandwidth for video/audio streams.

Real-Time Connection Status Retrieval

Client SDKs provide APIs to retrieve real-time network and audio/video quality conditions. You can reduce the resolution or switch to an audio only stream if the network quality is not good, or adjust audio levels if audio quality is poor. Table 3 lists connection status information supported by client SDKs.

Table 3. Connection Status Information supported by Client SDKs

Conclusion

Based on WebRTC technology, Intel® Collaboration Suite for WebRTC builds an end-to-end solution, allowing you to enhance your applications with Internet video communication capabilities. The acceleration from Intel’s media processing platforms on the client and server sides, such as the Intel® Visual Compute Accelerator, improves the client user experience as well as the server side cost-effectiveness.

Additional Information

For more information, please visit the following web pages:
 

Intel Visual Compute Accelerator:
http://www.intel.com/content/www/us/en/servers/media-and-graphics/visual-compute-accelerator.html
http://www.intel.com/visualcloud

Intel Collaboration Suite for WebRTC:
http://webrtc.intel.com
https://software.intel.com/en-us/forums/webrtc
https://software.intel.com/zh-cn/forums/webrtc

The Internet Engineering Task Force (IETF) Working Group:
http://tools.ietf.org/wg/rtcweb/

W3C WebRTC Working Group:
http://www.w3.org/2011/04/webrtc/

WebRTC Open Project:
http://www.webrtc.org

Acknowledgements (alphabetical)

Elmer Amaya, Jianjun Zhu, Jianlin Qiu, Kreig DuBose, Qi Zhang, Shala Arshi, Shantanu Gupta, Yuqiang Xian

About the Author

Lei Zhai is the engineering manager in the Intel Software and Solutions Group (SSG), Systems Technologies & Optimizations (STO), Client Software Optimization (CSO). His engineering team focuses on Intel® Collaboration Suite of WebRTC product development and its optimization on IA platforms.

Intel® XDK FAQs - Debug & Test


What are the requirements for Testing on Wi-Fi?

  1. Both Intel XDK and App Preview mobile app must be logged in with the same user credentials.
  2. Both devices must be on the same subnet.

Note: Your computer's Security Settings may be preventing Intel XDK from connecting with devices on your network. Double check your settings for allowing programs through your firewall. At this time, testing on Wi-Fi does not work within virtual machines.

How do I configure app preview to work over Wi-Fi?

  1. Ensure that both Intel XDK and App Preview mobile app are logged in with the same user credentials and are on the same subnet
  2. Launch App Preview on the device
  3. Log into your Intel XDK account
  4. Select "Local Apps" to see a list of all the projects in Intel XDK Projects tab
  5. Select desired app from the list to run over Wi-Fi

Note: Ensure the app source files are referenced from the right source directory. If it isn't, on the Projects Tab, change the 'source' directory so it is the same as the 'project' directory and move everything in the source directory to the project directory. Remove the source directory and try to debug over local Wi-Fi.

How do I clear app preview cache and memory?

[Android*] Simply kill the app running on your device as an Active App on Android* by swiping it away after clicking the "Recent" button in the navigation bar. Alternatively, you can clear data and cache for the app from under Settings App > Apps > ALL > App Preview.

[iOS*] By double tapping the Home button then swiping the app away.

[Windows*] You can use the Windows* Cache Cleaner app to do so.

What are the Android* devices supported by App Preview?

We officially only support and test Android* 4.x and higher, although you can use Cordova for Android* to build for Android* 2.3 and above. For older Android* devices, you can use the build system to build apps and then install and run them on the device to test. To help in your testing, you can include the weinre script tag from the Test tab in your app before you build your app. After your app starts up, you should see the Test tab console light up when it sees the weinre script tag contact the device (push the "begin debugging on device" button to see the console). Remember to remove the weinre script tag before you build for the store.

What do I do if Intel XDK stops detecting my Android* device?

When Intel XDK is not running, kill all adb processes that are running on your workstation and then restart Intel XDK, as conflicts between different versions of adb frequently cause such issues. Ensure that applications such as Eclipse that run copies of adb are not running. You may scan your disk for copies of adb:

[Linux*/OS X*]:

$ sudo find / -name adb -type f 

[Windows*]:

> cd \
> dir /s adb.exe

For more information on Android* USB debug, visit the Intel XDK documentation on debugging and testing.

How do I debug an app that contains third party Cordova plugins?

See the Debug and Test Overview doc page for a more complete overview of your debug options.

When using the Test tab with Intel App Preview your app will not include any third-party plugins, only the "core" Cordova plugins.

The Emulate tab will load the JavaScript layer of your third-party plugins, but does not include a simulation of the native code part of those plugins, so it will present you with a generic "return" dialog box to allow you to execute code associated with third-party plugins.

When debugging Android devices with the Debug tab, the Intel XDK creates a custom debug module that is then loaded onto your USB-connected Android device, allowing you to debug your app AND its third-party Cordova plugins. When using the Debug tab with an iOS device only the "core" Cordova plugins are available in the debug module on your USB-connected iOS device.

If the solutions above do not work for you, then your best bet for debugging an app that contains a third-party plugin is to build it and debug the built app installed and running on your device. 

[Android*]

1) For Crosswalk* or Cordova for Android* build, create an intelxdk.config.additions.xml file that contains the following lines:

<!-- Change the debuggable preference to true to build a remote CDT debuggable app for -->
<!-- Crosswalk* apps on Android* 4.0+ devices and Cordova apps on Android* 4.4+ devices. -->
<preference name="debuggable" value="true" />
<!-- Change the debuggable preference to false before you build for the store. -->

and place it in the root directory of your project (in the same location as your other intelxdk.config.*.xml files). Note that this will only work with Crosswalk* on Android* 4.0 or newer devices or, if you use the standard Cordova for Android* build, on Android* 4.4 or greater devices.

2) Build the Android* app

3) Connect your device to your development system via USB and start app

4) Start Chrome on your development system and type "chrome://inspect" in the Chrome URL bar. You should see your app in the list of apps and tabs presented by Chrome, you can then push the "inspect" link to get a full remote CDT session to your built app. Be sure to close Intel XDK before you do this, sometimes there is interference between the version of adb used by Chrome and that used by Intel XDK, which can cause a crash. You might have to kill the adb process before you start Chrome (after you exit the Intel XDK).

[iOS*]

Refer to the instructions on the updated Debug tab docs to get on-device debugging. We do not have the ability to build a development version of your iOS* app yet, so you cannot use this technique to build iOS* apps. However, you can include the weinre script from the Test tab in your iOS* app when you build it and use the Test tab to remotely access your built iOS* app. This works best if you include a lot of console.log messages.

[Windows* 8]

You can use the Test tab, which gives you a weinre script. Include the script in the app you build, run the app, and connect to the weinre server to work with the console.

Alternatively, you can use App Center to setup and access the weinre console (go here and use the "bug" icon).

Another approach is to write console.log messages to a <textarea> screen on your app. See either of these apps for an example of how to do that:

Why does my device show as offline on Intel XDK Debug?

“Media” mode is the default USB connection mode, but due to some unidentified reason, it frequently fails to work over USB on Windows* machines. Configure the USB connection mode on your device for "Camera" instead of "Media" mode.

What do I do if my remote debugger does not launch?

You can try the following to have your app run on the device via debug tab:

  • Place the intelxdk.js library before the </body> tag
  • Place your app specific JavaScript files after it
  • Place the call to initialize your app in the device ready event function

Why do I get an "error installing App Preview Crosswalk" message when trying to debug on device?

You may be running into a RAM or storage problem on your Android device; as in, not enough RAM available to load and install the special App Preview Crosswalk app (APX) that must be installed on your device. See this site (http://www.devicespecifications.com) for information regarding your device. If your device has only 512 MB of RAM, which is a marginal amount for use with the Intel XDK Debug tab, you may have difficulties getting APX to install.

You may have to do one or all of the following:

  • remove as many apps from RAM as possible before installing APX (reboot the device is the simplest approach)
  • make sure there is sufficient storage space in your device (uninstall any unneeded apps on the device)
  • install APX by hand

The last step is the hardest, but only if you are uncomfortable with the command-line:

  1. while attempting to install APX (above) the XDK downloaded a copy of the APK that must be installed on your Android device
  2. find that APK that contains APX
  3. install that APK manually onto your Android device using adb

To find the APK, on a Mac:

$ cd ~/Library/Application\ Support/XDK
$ find . -name *apk

To find the APK, on a Windows machine:

> cd %LocalAppData%\XDK
> dir /s *.apk

For each version of Crosswalk that you have attempted to use (via the Debug tab), you will find a copy of the APK file (but only if you have attempted to use the Debug tab and the XDK has successfully downloaded the corresponding version of APX). You should find something similar to:

./apx_download/12.0/AppAnalyzer.apk

following the searches, above. Notice the directory that specifies the Crosswalk version (12.0 in this example). The file named AppAnalyzer.apk is APX and is what you need to install onto your Android device.

Before you install onto your Android device, you can double-check to see if APX is already installed:

  • find "Apps" or "Applications" in your Android device's "settings" section
  • find "App Preview Crosswalk" in the list of apps on your device (there can be more than one)

If you found one or more App Preview Crosswalk apps on your device, you can see which versions they are by using adb at the command-line (this assumes, of course, that your device is connected via USB and you can communicate with it using adb):

  1. type adb devices at the command-line to confirm you can see your device
  2. type adb shell 'pm list packages -f' at the command-line
  3. search the output for the word app_analyzer

The specific version(s) of APX installed on your device end with a version ID. For example, com.intel.app_analyzer.v12 means you have APX for Crosswalk 12 installed on your device.

To install a copy of APX manually, cd to the directory containing the version of APX you want to install and then use the following adb command:

$ adb install AppAnalyzer.apk

If you need to remove the v12 copy of APX, due to crowding of available storage space, you can remove it using the following adb command:

$ adb uninstall com.intel.app_analyzer.v12

or

$ adb shell am start -a android.intent.action.DELETE -d package:com.intel.app_analyzer.v12

The second one uses an Android delete intent to remove the app; you'll have to confirm the removal on the Android device's screen. See this SO issue for details. Obviously, if you want to uninstall a different version of APX, specify the package ID corresponding to that version of APX.

Why is Chrome remote debug not working with my Android or Crosswalk app?

For a detailed discussion regarding how to use Chrome on your desktop to debug an app running on a USB-connected device, please read this doc page Remote Chrome* DevTools* (CDT).

Check to be sure the following conditions have been met:

  • The version of Chrome on your desktop is greater than or equal to the version of the Chrome webview in which you are debugging your app.

    For example, Crosswalk 12 uses the Chrome 41 webview, so you must be running Chrome 41 or greater on your desktop to successfully attach a remote Chrome debug session to an app built with Crosswalk 12. The native Chrome webview in an Android 4.4.2 device is Chrome 30, so your desktop Chrome must be greater than or equal to Chrome version 30 to debug an app that is running on that native webview.
  • Your Android device is running Android 4.4 or higher, if you are trying to remote debug an app running in the device's native webview, and it is running Android 4.0 or higher if you are trying to remote debug an app running Crosswalk.

    When debugging against the native webview, remote debug with Chrome requires that the remote webview is also Chrome; this is not guaranteed to be the case if your Android device does not include a license for Google services. Some manufacturers do not have a license agreement with Google for distribution of the Google services on their devices and, therefore, may not include Chrome as their native webview, even if they are an Android 4.4 or greater device.
  • Your app has been built to allow for remote debug.

    Within the intelxdk.config.additions.xml file you must include this line: <preference name="debuggable" value="true" /> to build your app for remote debug. Without this option your app cannot be attached to for remote debug by Chrome on your desktop.

How do I detect if my code is running in the Emulate tab?

In the obsolete intel.xdk APIs there is a property you can test to detect whether your app is running within the Emulate tab or on a device. That property is intel.xdk.isxdk. A simple alternative is to perform the following test:

if( window.tinyHippos )

If the test passes (the result is true) you are executing in the Emulate tab.

Never ending "Transferring your project files to the Testing Device" message from Debug tab; results in no Chrome DevTools debug console.

This is a known issue but a resolution for the problem has not yet been determined. If you find yourself facing this issue you can do the following to help resolve it.

On a Windows machine, exit the Intel XDK and open a "command prompt" window:

> cd %LocalAppData%\XDK
> rmdir cdt_depot /s/q

On a Mac or Linux machine, exit the Intel XDK and open a "terminal" window:

$ find ~ -name global-settings.xdk
$ cd <location-found-above>
$ rm -Rf cdt_depot

Restart the Intel XDK and try the Debug tab again. This procedure is deleting the cached copies of the Chrome DevTools that were retrieved from the corresponding App Preview debug module that was installed on your test device.

One observation that causes this problem is the act of removing one device from your USB and attaching a new device for debug. A workaround that helps sometimes, when switching between devices, is to:

  • switch to the Develop tab
  • close the XDK
  • detach the old device from the USB
  • attach the new device to your USB
  • restart the XDK
  • switch to the Debug tab

Can you integrate the iOS Simulator as a testing platform for Intel XDK projects?

The iOS simulator only runs on Apple Macs... We're trying to make the Intel XDK accessible to developers on the most popular platforms: Windows, Mac and Linux. Additionally, the iOS simulator requires a specially built version of your app to run, you can't just load an IPA onto it for simulation.

What is the purpose of having only a partial emulation or simulation in the Emulate tab?

There's no purpose behind it, it's simply difficult to emulate/simulate every feature and quirk of every device.

Not everyone can afford hardware for testing, especially iOS devices; what can I do?

You can buy a used iPod and that works quite well for testing iOS apps. Of course, the screen is smaller and there is no compass or phone feature, but just about everything else works like an iPhone. If you need to do a lot of iOS testing it is worth the investment. A new iPod costs $200 in the US. Used ones should cost less than that. Make sure you get one that can run iOS 8.

Is testing on Crosswalk on a virtual Android device inside VirtualBox good enough?

When you run the Android emulator you are running on a fictitious device, but it is a better emulation than what you get with the iOS simulator and the Intel XDK Emulate tab. The Crosswalk webview further abstracts the system so you get a very good simulation of a real device. However, considering how inexpensive and easy Android devices are to obtain, we highly recommend you use a real device (with the Debug tab), it will be much faster and even more accurate than using the Android emulator.

Why isn't the Intel XDK emulation as good as running on a real device?

Because the Intel XDK Emulate tab is a Chromium browser, so what you get is the behavior inside that Chromium browser along with some conveniences that make it appear to be a hybrid device. It's poorly named as an emulator, but that was the name given to it by the original Ripple Emulator project. What it is most useful for is simulating most of the core Cordova APIs and your basic application logic. After that, it's best to use real devices with the Debug tab.

Why doesn't my custom splash screen show in the emulator or App Preview?

Ensure the splash screen plugin is selected. Custom splash screens only get displayed on a built app. The emulator and app preview will always use Intel XDK splash screens. Please refer to the 9-Patch Splash Screen sample for a better understanding of how splash screens work.

Is there a way to detect if my program has stopped due to using uninitialized variable or an undefined method call?

This is where the remote debug features of the Debug tab are extremely valuable. Using a remote CDT (or remote Safari with a Mac and iOS device) are the only real options for finding such issues. WEINRE and the Test tab do not work well in that situation because when the script stops WEINRE stops.

Why doesn't the Intel XDK go directly to Debug assuming that I have a device connected via USB?

We are working on streamlining the debug process. There are still obstacles that need to be overcome to ensure that the process of connecting to a device over USB is painless.

Can a custom debug module that supports USB debug with third-party plugins be built for iOS devices, or only for Android devices?

The Debug tab, for remote debug over USB can be used with both Android and iOS devices. Android devices work best. However, at this time, debugging with the Debug tab and third-party plugins is only supported with Android devices (running in a Crosswalk webview). We are working on making the iOS option also support debug with third-party plugins, like what you currently get with Android.

Why does my Android debug session not start when I'm using the Debug tab?

Some Android devices include a feature that prevents some applications and services from auto-starting, as a means of conserving power and maximizing available RAM. On Asus devices, for example, there is an app called the "Auto-start Manager" that manages apps that include a service that needs to start when the Android device starts.

If this is the case on your test device, you need to enable the Intel App Preview application as an app that is allowed to auto-start. See the image below for an example of the Asus Auto-start Manager:

Another thing you can try is manually starting Intel App Preview on your test device before starting a debug session with the Debug tab.

How do I share my app for testing in App Preview?

The only way to retrieve a list of apps in App Preview is to login. If you do not wish to share your credentials, you can create an alternate account and push your app to the cloud using App Preview and share that account's credentials, instead.

I am trying to use Live Layout Editing but I get a message saying Chrome is not installed on my system.

The Live Layout Editing feature of the Intel XDK is built on top of the Brackets Live Preview feature. Most of the issues you may experience with Live Layout Editing can be addressed by reviewing this Live Preview Isn't Working FAQ from the Brackets Troubleshooting wiki. In particular, see the section regarding using Chrome with Live Preview.

Back to FAQs Main

Improve the Security of Android* Applications using Hooking Techniques: Part 1


Download PDF [PDF 1.1 MB]

Contents


In the Android* development world, developers usually take advantage of third-party libraries (such as game engines, database engines, or mobile payment engines) to develop their applications. Often, these third-party libraries are closed-source libraries, so developers cannot change them. Sometimes third-party libraries introduce security issues to the applications. For example, an internal log print for debug purposes may leak the user credentials during login and payment, or some resources and scripts stored locally in clear text for a game engine can be obtained easily by an attacker.

In this article, I will share a few studies that are conducted using the hooking technique to provide a simple and effective protection solution against certain offline attacks in Android applications.

Common Security Risks in Android

Android Application and Package Overview

Android applications are commonly written in the Java* programming language. When developers need to request performance or low-level API access, they can code in C/C++ and compile into a native library, and then call it through the Java Native Interface (JNI). After that, the Android SDK tools pack all compiled code, data, and resource files into an Android Package (APK).

Android apps are packaged and distributed in APK format, which is a standard ZIP file format. It can be extracted using any ZIP tools. Once extracted, an APK file may contain the following folders and files (see Figure 1):

  1. META-INF directory
    • MANIFEST.MF — manifest file
    • CERT.RSA — certificate of the application
    • CERT.SF — list of resources and SHA-1 digest of the corresponding lines in the MANIFEST.MF file
  2. classes.dex — Java classes compiled in the DEX file format understandable by the Dalvik virtual machine
  3. lib — directory containing the compiled code that is specific to a software layer of a processor, with these subdirectories
    • armeabi — compiled code for all ARM*-based processors
    • armeabi-v7a — compiled code for all ARMv7 and above-based processors
    • x86 — compiled code for Intel® x86 processors
    • mips — compiled code for MIPS processors
  4. assets — directory containing applications assets, which can be retrieved by AssetManager
  5. AndroidManifest.xml — an additional Android manifest file, describing the name, version, access rights, referenced library files for the application
  6. res — directory where all application resources are placed
  7. resources.arsc — file containing precompiled resources

Figure 1: The content of an Android* APK package

Once the package is installed on the user’s device, its files are extracted and placed in the following directories:

  1. The entire app package file is copied to /data/app
  2. The classes.dex is extracted and optimized, and then the optimized file is copied to the /data/dalvik-cache
  3. The native libraries are extracted and copied to /data/app-lib/<package-name>
  4. A folder named /data/data/<package-name> is created and assigned for the application to store its private data

Risk Awareness in Android Development

Analyzing the folder and file structure described in the previous section reveals several vulnerable points that developers should be aware of. An attacker can get a lot of valuable information by exploiting these weaknesses.

One vulnerable point is that the application stores raw data in the ‘assets’ folder, for example, the resources used by a game engine. These include the audio and video materials, the game logic script files, and the texture resources for the sprites and scenes. Because the Android app package is not encrypted, an attacker can get these resources easily by obtaining the package from the app store or from another Android device.

Another vulnerable point is weak file access control on rooted devices and external storage. An attacker can get the application's private data files via root privilege on the victim's device, or from application data written to external storage such as an SD card. If the private data is not well protected, attackers can extract information such as user account names and passwords from the files.

Finally, the debug information might be visible. If developers forget to comment out the relevant debugging code before publishing an application, attackers can retrieve its debug output using Logcat.

Hooking Technique Overview

What is Hooking?

Hooking is a term for a range of code modification techniques that are used to change the behavior of the original code running sequence by inserting instructions into the code segment at runtime (Figure 2 sketches the basic flow of hooking).

Figure 2: Hooking can change the running sequence of the program

In this article, two types of hooking techniques are investigated:

  1. Symbol table redirection

Analyzing the symbol table of the dynamic-link library, we can find all relocation addresses of the external calling function Func1(). We then patch each relocation address to the start address of the hooking function Hook_Func1() (see Figure 3).

Figure 3: The flow of symbol table redirection

  2. Inline redirection

Unlike symbol table redirection, which must modify every relocation address, inline hooking only overwrites the start bytes of the target function we want to hook (see Figure 4). Inline redirection is more robust than symbol table hooking because a single change covers every call site. The downside is that if the original function is called from any place in the application, it will also execute the code in the hooked function, so we must identify the caller carefully in the redirected function.

Figure 4: The flow of inline redirection

Implementing Hooking

Since the Android OS is based on the Linux* kernel, many of the studies of Linux apply to Android as well. The examples detailed here are based on Ubuntu* 12.04.5 LTS.

Inline Redirection

The simplest way to create an inline redirection is to insert a JMP instruction at the start address of the function. When the code calls the target function, it will jump to the redirect function immediately. See the example shown in Figure 5.

In the main process, the code runs func1() to process some data, then returns to the main process. The start address of func1() is 0xf7e6c7e0.

Figure 5: Inline hooking uses the first five bytes of the function to insert a JMP instruction

The inline hooking injection process replaces the first five bytes at that address with 0xE9 E0 D7 E6 F7, a jump instruction that transfers control to the address 0xF7E6D7E0, the entrance of the function my_func1(). All calls to func1() are thereby redirected to my_func1(). The data input to my_func1() goes through a pre-processing stage, which then passes the processed data to func1() to complete the original processing. Figure 6 shows the code running sequence after hooking func1(), and Figure 7 gives the pseudo C code of func1() after hooking.

Figure 6: Usage of hooking: Insert my_func1() in func1()

Using this method, the original code will not be aware of the change of the data processing flow. But more processing code has been appended to the original function func1(). Developers can use this technique to add patches to the function at runtime.

Figure 7: Usage of hooking: the pseudo C code of Figure 6

Symbol Table Redirection

Compared to inline redirection, symbol table redirection is more complicated. The relevant hooking code has to parse the entire symbol table, handle all possible cases, and search for and replace the relocated function addresses one by one. The symbol table in a DLL (dynamic-link library) varies considerably depending on the compiler parameters used and on how developers call the external function.

To study all the cases regarding the symbol table, a test project was created that includes two dynamic libraries compiled with different compiler parameters:

  1. The Position Independent Code (PIC) object — libtest_PIC.so
  2. The non-PIC object — libtest_nonPIC.so

Figures 8-11 show the code execution flow of the test program, the source code of libtest1() and libtest2() (identical functions compiled with different compiler parameters), and the output of the program.

Figure 8: Software working flow of the test project

The function printf() is used as the hooking target. It is the most commonly used function for printing information to the console. It is declared in stdio.h, and its implementation is located in the glibc shared library.

In the libtest_PIC and libtest_nonPIC libraries, three external function-calling conventions are used:

  1. Direct function call
  2. Indirect function call
    • Local function pointer
    • Global function pointer

Figure 9: The code of libtest1()

Figure 10: The code of libtest2(), the same as libtest1()

Figure 11: The output of the test program

Study of the Non-PIC Code in libtest_nonPIC.so

A standard DLL object file is composed of multiple sections, each with its own role and definition. The .rel.dyn section contains the dynamic relocation table. The section information of the file can be disassembled with the command objdump -D libtest_nonPIC.so.

In the relocation section .rel.dyn of libtest_nonPIC.so (see Figure 12), there are four places that contain relocation information for the function printf(). Each entry in the dynamic relocation section includes the following fields:

  1. The Offset value identifies the location within the object to be adjusted.
  2. The Type field identifies the relocation type. R_386_32 is a relocation that places the absolute 32-bit address of the symbol into the specified memory location; R_386_PC32 places the PC-relative 32-bit address of the symbol into the specified memory location.
  3. The Sym field refers to the index of the referenced symbol.
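
The two relocation formulas can be sketched in Python against a byte buffer standing in for the loaded image. This is an illustrative simulation, not real ELF patching; the helper names and the assumption of a load base of 0 (so an offset is also the run-time address P of the patched location) are ours:

```python
import struct

def apply_r_386_32(image, offset, sym_addr):
    # R_386_32: store the absolute 32-bit address S + A, where the
    # addend A is the value already stored at the relocation offset.
    addend = struct.unpack_from("<i", image, offset)[0]
    struct.pack_into("<I", image, offset, (sym_addr + addend) & 0xFFFFFFFF)

def apply_r_386_pc32(image, offset, sym_addr):
    # R_386_PC32: store the PC-relative value S + A - P. With a load
    # base of 0, P equals the offset of the patched location itself.
    addend = struct.unpack_from("<i", image, offset)[0]
    struct.pack_into("<I", image, offset,
                     (sym_addr + addend - offset) & 0xFFFFFFFF)
```

This mirrors what the dynamic linker does at load time: for each entry in .rel.dyn it computes one of these values and writes it at the entry's Offset.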

Figure 13 shows the generated assembly code of the function libtest1(). The entry addresses of printf(), marked in red, are specified in the relocation section .rel.dyn in Figure 12.

Figure 12: Relocation section information of libtest_nonPIC.so

Figure 13: Disassembled code of libtest1(), compiled in non-PIC format

To redirect printf() to another function called hooked_printf(), the hooking function should write the address of hooked_printf() to these four offset addresses.

Figure 14: Working flow of 'printf("libtest1: 1st call to the original printf()\n");'

Figure 15: Working flow of 'global_printf1("libtest1: global_printf1()\n");'

Figure 16: Working flow of 'local_printf("libtest1: local_printf()\n");'

As shown in Figures 14-16, when the linker loads the dynamic library into memory, it first finds the relocated symbol name printf, then writes the real address of printf to the corresponding locations (offsets 0x4b5, 0x4c2, 0x4cf, and 0x200c). These locations are defined in the relocation section .rel.dyn. After that, the code in libtest1() can jump to printf() properly.



Improve the Security of Android* Applications using Hooking Techniques: Part 2





Study of the PIC Code in libtest_PIC.so

If the object is compiled in PIC mode, relocation is implemented differently. As the section information of libtest_PIC.so in Figure 17 shows, the printf() relocation information is located in two relocation sections: .rel.dyn and .rel.plt. Two new relocation types, R_386_GLOB_DAT and R_386_JMP_SLOT, are used; for both, the absolute 32-bit address of the substituted function is filled in at the given offset addresses.

Figure 17: Relocation section of libtest_PIC.so

Figure 18 shows the assembly code of the function libtest2(), which is compiled in PIC mode. The entry addresses of printf(), marked in red, are specified in the relocation sections .rel.dyn and .rel.plt in Figure 17.

Figure 18: Disassembled code of libtest2(), compiled with the -PIC parameter

Figure 19: Working flow of 'printf("libtest2: 1st call to the original printf()\n");'

Figure 20: Working flow of 'global_printf2("libtest2: global_printf2()\n");'

Figure 21: Working flow of 'local_printf("libtest2: local_printf()\n");'

As Figures 19-21 show, when working with the dynamic library generated with the -PIC parameter, the code in libtest2() jumps to the addresses stored at offsets 0x1fe0, 0x2010, and 0x2000, which are the entry points of printf().

Hook Solution

If the hook module wants to intercept calls to printf() and redirect them to another function, it should write the redirected function's address to the offset addresses of the symbol 'printf' defined in the relocation sections, after the linker has loaded the dynamic library into memory.

To replace the call of printf() with a call of the redirected hooked_printf() function, as shown in the software flow diagram in Figure 22, a hook function should run between the dlopen() and libtest() calls. The hook function first gets the offset address of the symbol printf, which is 0x1fe0, from the relocation section named .rel.dyn. It then writes the absolute address of the hooked_printf() function to that offset. After that, when the code in libtest2() calls printf(), it enters hooked_printf() instead.
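
The mechanism can be illustrated with a small, hedged simulation in Python: the dictionary below plays the role of the GOT, and rewriting one slot redirects every call site, just as the hook function does with the real offset 0x1fe0. All names here are illustrative, not taken from the article's source:

```python
# Conceptual simulation of symbol-table hooking, not real ELF
# rewriting: the "GOT" is a plain table of function slots, and the
# library code always calls through a slot, so overwriting one slot
# redirects every call site at once.
outputs = []

def original_printf(msg):
    outputs.append(msg)               # stand-in for the real printf()

got = {"printf": original_printf}     # one slot per relocated symbol

def libtest2(msg):
    got["printf"](msg)                # the indirect call the linker set up

def install_hook(table, symbol, make_hook):
    # Swap the slot for a wrapper, keeping the original for chaining.
    table[symbol] = make_hook(table[symbol])

libtest2("libtest2: 1st call")        # reaches original_printf

# Mimic the article's hooked_printf(): append " is HOOKED", then
# delegate to whatever the slot pointed at before.
install_hook(got, "printf",
             lambda orig: (lambda msg: orig(msg + " is HOOKED")))

libtest2("libtest2: 2nd call")        # now routed through the hook
```

Note that the caller, libtest2(), is unchanged: only the slot it calls through was rewritten, which is exactly the property the real GOT patch relies on.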

Figure 22: Example of how the hook function intercepts the call to printf() and reroutes it to hooked_printf(). The original calling process is described in Figure 21.

To cover all the possible cases listed previously, the complete flow chart of the hook function is shown in Figure 23, and the corresponding change to the main() function is depicted in Figure 24.

Figure 23: The flow chart of the ELF hook module

Figure 24: Code in main() after hooking

The output of the program is shown in Figure 25. When the first calls to libtest1()/libtest2() execute, printf() is called inside the functions. When the two functions are called again after the hook functions have run, the calls to printf() are redirected to hooked_printf(), which appends the string "is HOOKED" to the end of the normally printed string. Figure 26 shows the program flow after hooking; compared with the original flow shown in Figure 8, hooked_printf() has been injected into libtest1() and libtest2().

Figure 25: Output of the test program; printf() has been hooked

Figure 26: The running flow of the test project after hooking

Case Study – a Hook-Based Protection Scheme in Android

Based on the studies of the hooking technique in the previous sections, we developed a plug-in to help Android application developers improve the security of their applications. Developers need to add only one Android native library to their projects and one line of Java code to load it at start-up. The library then injects protection code into the other third-party libraries in the application. The protection code encrypts local file input/output streams and intercepts __android_log_print() to prevent leaking user information through Logcat debugging output.

To verify the effectiveness of the protection plug-in, we wrote an Android application that simulates an application containing a third-party library. In the test application, the third-party library does two things:

  1. When an external Java instruction calls the functions in the library, it will print some information by calling __android_log_print().
  2. In the library, the code creates a file (/sdcard/data.dat) to save data in local storage without encryption, then reads it back and prints it on the screen. This action is to simulate the application trying to save some sensitive information in the local file system.

Figures 27-30 compare screenshots of the test program, the Logcat output, and the content of the saved file in the device’s local file system before and after hooking.

Figure 27: The Android* platform is a Teclast X89HD running Android 4.2.2

Figure 28: App output - no change after hooking

Figure 29: Logcat output - empty after hooking

Figure 30: Local file ‘data.dat’ at /sdcard has been encrypted after hooking

As the figures show, the running flow of the program after hooking is the same as without hooking. However, Logcat cannot catch any output from the native library after hooking, and the content of the local file is no longer stored in plain text.

The plug-in helps the test application improve security against malicious attacks on collecting information via Logcat, as well as offline attacks to the local file system.

Conclusion

The hooking technique can be used in many development scenarios, providing seamless security protection to Android applications. Hook-based protection schemes can not only be used on Android, but also can be expanded to other operating systems such as Windows*, Embedded Linux, or other operating systems designed for Internet of Things (IoT) devices. It can significantly reduce the development cycle as well as maintenance costs. Developers can develop their own hook-based security scheme or use the professional third-party security solutions available on the market.

References

Redirecting functions in shared ELF libraries
Apriorit Inc, Anthony Shoumikhin, 25 Jul 2013
http://www.codeproject.com/Articles/70302/Redirecting-functions-in-shared-ELF-libraries

x86 API Hooking Demystified
Jurriaan Bremer
http://jbremer.org/x86-api-hooking-demystified/

Android developer guide
http://developer.android.com/index.html

Android Open Source Project
https://source.android.com/

About the Author

Jianjun Gu is a senior application engineer in the Intel Software and Solutions Group (SSG), Developer Relations Division, Mobile Enterprise Enabling team. He focuses on the security and manageability of enterprise applications.

Intel® VTune™ Amplifier Tutorials


The following tutorials are quick paths to start using the Intel® VTune™ Amplifier. Each demonstrates an end-to-end workflow you can ultimately apply to your own applications.

NOTE:

Apart from the analysis and target configuration details, most of the VTune Amplifier XE tutorials are also applicable to the VTune Amplifier for Systems. The Finding Hotspots on the Intel Xeon Phi coprocessor tutorial is applicable only to the VTune Amplifier XE.

VTune Amplifier XE Tutorials

Take This Short Tutorial | Learn To Do This

Finding Hotspots
Duration: 10-15 minutes

C++ Tutorial
Windows* OS: HTML | PDF
Linux* OS: HTML | PDF
Sample code: tachyon_vtune_amp_xe

Fortran Tutorial
Windows* OS: HTML | PDF
Linux* OS: HTML | PDF
Sample code: nqueens_fortran

Identify where your application is spending time, detect the most time-consuming program units and how they were called.

Finding Hotspots on the Intel® Xeon Phi™ Coprocessor
Duration: 10-15 minutes

C++ Tutorial
Windows* OS: HTML | PDF
Linux* OS: HTML | PDF
Sample code: matrix_vtune_amp_xe

Identify where your native Intel Xeon Phi coprocessor-based application is spending time, estimate code efficiency by analyzing hardware event-based metrics.

Analyzing Locks and Waits
Duration: 10-15 minutes

C++ Tutorial
Windows* OS: HTML | PDF
Linux* OS: HTML | PDF
Sample code: tachyon_vtune_amp_xe

Identify locks and waits preventing parallelization.

Identifying Hardware Issues
Duration: 10-15 minutes

C++ Tutorial
Windows* OS: HTML | PDF
Linux* OS: HTML | PDF
Sample code: matrix_vtune_amp_xe

Identify the hardware-related issues in your application such as data sharing, cache misses, branch misprediction, and others.

VTune Amplifier for Systems Tutorials

Take This Short Tutorial | Learn To Do This

Finding Hotspots on a Remote Linux* System
Duration: 10-15 minutes

C++ Tutorial
Linux* OS: HTML | PDF
Sample code: tachyon_vtune_amp_xe

Configure and run a remote Advanced Hotspots analysis on a Linux target system.

Finding Hotspots on an Android* Platform
Duration: 10-15 minutes

C++ Tutorial
Windows* OS: HTML | PDF
Linux* OS: HTML | PDF
Sample code: tachyon_vtune_amp_xe

Configure and run a remote Basic Hotspots analysis on an Android target system.

Analyzing Energy Usage on an Android* Platform
Duration: 10-15 minutes

Tutorial
Linux* OS: HTML | PDF
Windows* OS: HTML | PDF

Use the Intel Energy Profiler to run the Energy analysis with the Intel SoC Watch collector directly in the target Android system and view the collected data with the VTune Amplifier for Systems installed on the host Windows* or Linux* system.

Analyzing Energy Usage on a Windows* Platform
Duration: 20-30 minutes

Tutorial
Windows* OS: HTML | PDF
Sample code: Pi_Console.exe

Use the Intel Energy Profiler to run energy analysis of an idle system and a sample application with the Intel SoC Watch collector directly in the target Windows* system. Copy the results to the Windows host system and view the collected data with VTune Amplifier for Systems.

Android* and Crosswalk Cordova Version Code Issues


The release of Apache* Cordova* CLI 5 by the Apache Cordova project resulted in a change to how the android:versionCode parameter is calculated for apps built with the Intel® XDK using CLI 5. The android:versionCode is found in the AndroidManifest.xml file of every Android* and Android-Crosswalk APK; it is directly derived from the App Version Code field in the Build Settings section of the Projects tab:

If you have never published an app to an Android* store, this change in behavior will have no impact on you. It might, however, prevent side-loading an update to your app; in that case, simply uninstall the previously side-loaded app before installing your updated app.

New (CLI 5) App Version Code Algorithm for Android

Beginning with Cordova CLI 5, in order to maintain compatibility with standard Cordova, the Intel XDK no longer modifies the android:versionCode when building for Android-Crosswalk. Instead, the new Cordova CLI 5 encoding technique has been adopted for all Android builds. This change results in a discrepancy in the value of the android:versionCode that is inserted into your Android APK files when compared to building with CLI 4.1.2 (and earlier).

Here's what Cordova CLI 5 (Cordova-Android 4.x) does with the android:versionCode (App Version Code) number when you perform a Crosswalk build:

  • multiplies your android:versionCode by 10
  • adds 2 to the android:versionCode for Crosswalk ARM builds
  • adds 4 to the android:versionCode for Crosswalk x86 builds

Here's what Cordova CLI 5 (Cordova-Android 4.x) does with the android:versionCode (App Version Code) number when you perform a standard Android build (a non-Crosswalk build):

  • multiplies your android:versionCode by 10
  • adds 0 to the android:versionCode if the Minimum Android API is < 14
  • adds 8 to the android:versionCode if the Minimum Android API is 14-19
  • adds 9 to the android:versionCode if the Minimum Android API is > 19 (i.e., >= 20)
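
Assuming the rules above, the encoding can be sketched as a small Python function. The function name and the build-type labels are ours for illustration, not Cordova's:

```python
def cli5_version_code(app_version_code, build, min_sdk=14):
    # Sketch of the Cordova CLI 5 android:versionCode encoding
    # described above. "build" is one of our hypothetical labels:
    # "android", "crosswalk-arm", or "crosswalk-x86".
    code = app_version_code * 10
    if build == "crosswalk-arm":
        return code + 2
    if build == "crosswalk-x86":
        return code + 4
    # Standard Android build: offset depends on android:minSdkVersion.
    if min_sdk < 14:
        return code        # Minimum Android API < 14
    if min_sdk <= 19:
        return code + 8    # Minimum Android API 14-19
    return code + 9        # Minimum Android API >= 20
```

For an App Version Code of 1, this reproduces the store-visible values listed below (10, 12, 14, 18, and 19).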

So this means the Android store will find the following android:versionCode values inside your built APK if you set the App Version Code field to one in the Build Settings section of the Projects tab:

  • An App Version Code = "1" in the Build Settings section of the Intel XDK Projects tab results in:
    • android:versionCode = "10" for a regular Android build if the android:minSdkVersion is < 14
    • android:versionCode = "12" for an Android-Crosswalk embedded library ARM build
    • android:versionCode = "14" for an Android-Crosswalk embedded library x86 build
    • android:versionCode = "18" for a regular Android build if the android:minSdkVersion is 14-19
    • android:versionCode = "19" for a regular Android build if the android:minSdkVersion is > 19

NOTE: the Minimum Android* API field in the Build Settings section of the Projects tab corresponds to the value of the android:minSdkVersion number referenced in the bullets above.

This scheme results in an x86 APK file that contains a version code greater than the ARM APK file. This condition is necessary to ensure that the Android store delivers the appropriate architecture APK file to the requesting device. In this scheme, the x86 APK version code is always two greater than the ARM APK version code.

NOTE: the Intel XDK build system generates two APK files (one marked x86 and one marked ARM) when you elect the shared Crosswalk build option, even though only one APK is required. The only difference between these two shared model APK files is the version code, which follows the same android:versionCode scheme as the embedded Crosswalk build option.

If you HAVE PREVIOUSLY PUBLISHED an Android-Crosswalk app to an Android store, built with the Intel XDK CLI 4.1.2 option (or earlier), the new android:versionCode scheme described above may impact your ability to publish an update of your app! If you encounter that case, add 6000 (six with three zeroes) to your existing App Version Code field in the Build Settings section of the Projects tab. Your Crosswalk apps that were built with CLI 4.1.2 used a system that adds 60000 and 20000 (six with four zeroes and two with four zeroes) to the android:versionCode. That scheme is described in more detail below.

If you have only published standard Android apps (non-Crosswalk) in the past, and are still publishing standard Android apps, you should not have to make any changes to the App Version Code field in the Android Builds Settings section of the Projects tab (other than increasing it by one for a new version).

NOTE:

  • Android API 14 corresponds to Android 4.0
  • Android API 19 corresponds to Android 4.4
  • Android API 21 corresponds to Android 5.0 (API 20 is Android 4.4W)
  • CLI 5 and above (Cordova-Android 4.x) does not support Android 2.x or Android 3.x

Historic (CLI 4) App Version Code Algorithm for Android

In the past (CLI 4.1.2 and earlier), standard Cordova did not modify the Android version code, so the android:versionCode found in your Android APK by the store was identical to the value provided in the App Version Code field. In order to support the submission of multiple Android-Crosswalk APK files (e.g., ARM and x86) to an Android store, the Intel XDK build system did modify the version code for Android-Crosswalk embedded builds (and only for those builds; Crosswalk shared and regular Android build version codes were not modified).

The "historic" behavior of the Intel XDK build system regarding the Android version code that was inserted into your APK files, when built using CLI 4.1.2 (or earlier) was:

  • no change to the version code for regular Android builds (android:versionCode = App Version Code)
  • no change to the version code for Android-Crosswalk shared library builds
  • add 60000 to the version code for Android-Crosswalk x86 embedded library builds
  • add 20000 to the version code for Android-Crosswalk ARM embedded library builds
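
These historic rules can be sketched the same way; again, the function name and build-type labels are ours for illustration:

```python
def cli4_version_code(app_version_code, build):
    # Sketch of the historic (CLI 4.1.2 and earlier) Intel XDK rules
    # described above: only embedded Crosswalk builds were modified.
    offsets = {"android": 0,
               "crosswalk-shared": 0,
               "crosswalk-embedded-arm": 20000,
               "crosswalk-embedded-x86": 60000}
    return app_version_code + offsets[build]
```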

So this meant that you would find the following android:versionCode values inside your APK files, if you had set the App Version Code field to a value of one in the Build Settings section of the Projects tab and set the CLI version to 4.1.2:

  • An App Version Code = "1" in the Build Settings section of the Intel XDK Projects tab results in:
    • android:versionCode = "1" for a regular Android build
    • android:versionCode = "1" for an Android-Crosswalk shared library build
    • android:versionCode = "20001" for an Android-Crosswalk embedded library ARM build
    • android:versionCode = "60001" for an Android-Crosswalk embedded library x86 build

This scheme was used to ensure that the x86 APK contained a higher version code than the equivalent ARM APK, and also so that both APKs contained larger version codes than a generic Android APK that you might use, for example, to provide an app to Android 2.x and 3.x devices (because Android-Crosswalk requires at least Android 4.0). Putting a larger version code into the x86 APK makes sure that the Android store delivers the x86 APK to x86 devices, rather than incorrectly delivering an ARM APK to x86 devices.

Garbage Collection Workload for Android*


A New Way to Measure Android Garbage Collection Performance

As the mobile computing market evolves, Google Android* has become one of the most popular software stacks for smart mobile devices. Because Java* is the primary implementation language for Android applications, the Java Virtual Machine (JVM) is key to providing the best Android user experience.

The garbage collector (GC) is one of the JVM's most important components. It provides automatic memory management and ensures that Java programmers cannot accidentally (or purposely) crash the JVM by incorrectly freeing memory, improving both programmer productivity and system security. GC keeps track of the objects currently referenced by the Java program, so it can reclaim the memory occupied by unreferenced objects, also known as “garbage.” Accurate and efficient garbage collection is critical to high performance and a good user experience in Android applications.

GC may interfere with user thread execution, introducing overhead that hurts performance and user experience. For example, most GC algorithms require pausing all Java threads in an application at some point in order to guarantee that only garbage object memory is reclaimed. If the pause time is long, it can cause performance and user experience issues such as jank (an unresponsive or momentarily sluggish user interface) and a lack of responsiveness and smoothness. In addition, GC is usually triggered automatically by the JVM in the background, so programmers have little or no control over GC scheduling. Android developers must therefore be aware of this hidden software component.

To achieve the best performance and user experience by optimizing GC, a workload that reflects GC performance is indispensable. However, in our experience most popular Android workloads (gaming, parsing, security, etc.) stress GC only intermittently. Intel developed Garbage Collection Workload for Android (GCW for Android) to analyze GC performance and its influence on Android performance and user experience.

GCW for Android stresses the memory management subsystem of the Android Runtime (ART). It is designed to be representative of the peak memory use behavior of Android applications, so optimizations based on GCW for Android analysis not only improve the workload score but also improve user experience. Further, GCW for Android provides options for you to adjust its behavior, making it flexible enough to mimic different kinds of application behavior.

GCW for Android Overview

Intel developed GCW for Android based on the analysis of real-world applications, including typical Android applications in categories such as business, communications, and entertainment. GCW for Android is an object allocation and GC intensive workload designed for GC performance evaluation and analysis.

The workload has two working modes. You can run it from the command line or control it using a GUI. GCW for Android is configurable. You can specify the workload size, number of allocation threads, object allocation size distribution, and object lifetime distribution to fit different situations. It provides flexibility to test GC with different allocation behaviors, so you can use GCW for Android to analyze GC performance for most usage situations.

GCW for Android incorporates several metrics. It reports the total execution time as the primary metric of GC and object allocation efficiency. The workload also reports memory usage information such as the Java heap footprint and total allocated object bytes based on the Android logging system.

How to Run GCW for Android

For the Android platform, GCW for Android is provided as a single package: GCW_Android.apk. After installing the apk, clicking the GCW_Android icon launches the workload and displays a UI that includes Start and Setting buttons.


Figure 1. GCW for Android Launch UI

Clicking the “Start” button will run the workload using the default profile settings. The “default” settings option is used to reset the workload profile to the default. If you want to change the configuration, click the “Settings” button.


(a)


(b)
Figure 2. Configuration UI. (a) is the top part; (b) is the bottom part

The first setting is a selectable list of profiles. For now, “default” is the only option. A profile consists of the parameters used by the workload, and the default profile is derived from the characteristics of several real applications.

Total object size: Allows you to define the total object size allocated in one iteration by all allocation threads. The default is 100MB, which means that when running in multi-thread mode with four threads, each thread allocates 25MB’s worth of objects in one iteration of the stress test phase.

Bucket size: Allows you to define the size of the binary trees that are built in a single allocation phase. The default is 1MB.

Large object size: Allows you to define the large object size. The default is 12KB, which is the minimum size object that ART allocates in the large object space.

Object size distribution: Allows you to define the size distribution of allocated objects. The total sum should be 100%.

Object lifetime distribution: Allows you to define the lifetime of each object. An object’s lifetime is defined in units of the size of the objects (1MB by default) allocated from when it is created to when it is made unreachable. The first item in the lifetime data is the long-lived object percentage (the percentage of objects that live for the entire workload run), the second is the percentage of objects that die after the first period, the third is the percentage of objects that die after the second period (after allocating another 1MB worth of objects), and in general the K’th item is the percentage of objects that die after the (K-1)’st period (after allocating K-1 MB’s worth of objects). Items are separated by commas and each line should have the same number of items.

By default, the workload runs in multi-thread mode. If you want to run in single-thread mode, check "Run in single thread?"

To understand how GCW for Android reflects the JVM’s memory management characteristics and is representative of real applications, let’s go deeper to see how GCW for Android is designed.

GCW for Android Design

GCW for Android is designed to mimic the JVM memory management behavior that real applications exhibit. Detailed analysis of different user scenarios on a large number of popular Android applications indicate that Java programs create various sized objects with varying lifetimes, so the workload does too. Also, the default relationship between object sizes and lifetimes (small objects tend to have short lifetimes) and the multi-threaded allocation behavior are similar to those of real applications.

Abstracted from the analysis data, the following characteristics were chosen as the primary design points for GCW for Android.

Object size distribution

Figure 3 shows a histogram of object size distribution based on 17 popular Google Play* store applications. The X-axis is the percent of all objects of a given object size range and the Y-axis is the object size range buckets.


Figure 3. Object size distribution of popular apps

When the workload runs, around 80% of the objects allocated are small objects whose size is less than or equal to 64 bytes. We observed the same behavior on about 50 more popular apps, so GCW for Android allocates objects with many different sizes. How GCW for Android models object size is discussed in the next section, “GCW for Android Workflow.”

Object lifetime

Object lifetime is measured by how many garbage collections an object survives in the Java heap. Understanding object lifetime can help the JVM developer optimize GC performance by optimizing how GC works and tuning GC-related options.

In our investigations, the lines plotting object size against object lifetime show a loose relationship between the two, but lifetime does seem related to object size. Here we choose Gallery3D (Figure 4) and Google Maps* (Figure 5) as examples. The X-axis is the percentage of objects that die; the Y-axis is the number of GCs survived. Each line represents an object size range.


Figure 4. Gallery3D Object Lifetimes


Figure 5. Google Maps* Object Lifetimes

Most objects die after one to three GCs, but the lines aren’t congruent. For Google Maps, ~80% of the objects between 1-16 bytes die after the first GC, but only 60% of the objects between 33-64 bytes do. Different-sized objects can have different lifetimes, so reflecting object lifetime accurately is an important design goal. How GCW for Android models object lifetime is discussed in the next section, “GCW for Android Workflow.”

Multi-threading

Our investigation shows that most Android Java applications are multi-threaded, though the threads do not typically communicate much with each other. Each Java application running on an Android device may have more than one thread allocating objects in the Java heap simultaneously. GCW for Android supports multi-threaded allocation in order to mimic this real application characteristic. How GCW for Android supports multi-threading is explained in the next section, “GCW for Android Workflow.”

To summarize, the following workload characteristics are emulated in GCW for Android. Typical Android applications:

  • Allocate varied size objects.
  • Have similar allocated object size distributions.
  • Have allocated objects with different lifetimes, and their lifetimes seem to be related to their size.
  • Allocate objects in parallel in multiple threads.

Putting all these observations together, we designed the GCW for Android workflow to make it not only a JVM memory management workload, but also one that reflects actual usage scenarios.

GCW for Android Workflow

GCW for Android supports two threading modes: single- and multi-thread. In multi-thread mode, the number of threads defaults to the number of logical CPUs; you can change it.


(a)


(b)
Figure 6. GCW for Android Workflow

Internally, GCW for Android builds several binary trees in order to manage object sizes and lifetimes. It builds and deletes objects by inserting and deleting nodes in the trees, as shown in Figure 6(b).

Figures 6(a) and (b) show how GCW for Android works. It first launches a certain number of threads, and then each thread follows the same logic: allocate long-lived objects, then run the stress test. The stress test is the most time-consuming part of the workload: a big loop that iterates a configurable number of times (100 by default). In each iteration, GCW for Android allocates a configurable number of bytes (100MB by default), which by default includes six kinds of small objects (16, 24, 40, 96, 192, and 336 bytes) plus large objects (12KB, configurable). Small objects are created as nodes of binary trees; large objects are created as byte, char, int, or long arrays. In each iteration, the workload:

  • Builds binary trees.
  • Deletes some nodes from built trees according to the lifetimes of small objects.
  • Builds large object arrays.
  • Deletes some arrays according to lifetimes of large objects.

Now let’s take a closer look at the internal design of GCW for Android to understand how it achieves the aforementioned characteristics of emulating memory system use.

Object Size Distribution

To simulate common object size distribution patterns, GCW for Android internally defines seven object size buckets:

  • 16 byte    →   [1-16B]
  • 24 byte    →   [17-32B]
  • 40 byte    →   [33-64B]
  • 96 byte    →   [65-128B]
  • 192 byte  →   [129-256B]
  • 336 byte  →   [257-512B]
  • Large object (default is 12KB)

To represent object references conveniently, GCW for Android allocates small objects as nodes in a binary tree, while large objects are created as arrays. There are four types of large object arrays: byte, char, int, and long.

Now let’s see how GCW for Android manages object lifetimes to mimic real applications.

Object Lifetime Control

GCW for Android internally uses binary trees to control object lifetime: every binary tree has a predefined lifetime, so by controlling tree lifetimes it controls object lifetimes.

For example, suppose you want your object lifetime model to have three stages where 50% of objects die in period K, 25% of objects die after that in period K+1, and the remaining 25% live through period K+2.

At the beginning of period K, GCW for Android builds three trees simultaneously, one tree for each lifetime stage. In period K, one tree holding 50% of the objects is made unreachable by assigning null to the root object, which makes the whole tree unreachable from the GC point of view. So 50% of the objects are collected by GC after period K. Then in period K+1, the second tree holding 25% of objects is rendered unreachable by setting the root to null, thus causing 25% of the objects to be reclaimed after period K+1 and leaving 25% of the objects alive through period K+2 (see Figure 7).


Figure 7. Object lifetime control

In different use scenarios, object lifetime is not deterministic, so GCW for Android does not make object lifetimes deterministic either. To emulate real applications, GCW for Android gives every thread a random number generator with a different seed, which decides the object size and the lifetime stage of each node. In one example run, three 24-byte objects die after stage one, and the other objects die after stage 0.

To recap, GCW for Android makes it easy to reflect the object allocation and GC behavior of real applications, and using it can help you identify many opportunities in ART to enhance performance and user experience.

Opportunities Discovered Using GCW for Android

During our performance investigation we discovered that object allocation is typically the hottest part of an application. Further investigation showed that allocation can be made faster by inlining the RosAlloc allocation fast path, which means the call to the allocation function is eliminated. That resulted in ~7% improvement on GCW for Android. The implementation has been merged into the Android Open Source Project (AOSP).

Additionally, we found that GC marking time can be reduced by eliminating unnecessary card table processing between immune spaces. The card table is a mechanism that records cross-space object references, which guarantees the correctness of GC when it collects only a subset of the Java heap (the subsets are called “spaces”) instead of the whole heap. Analysis also revealed that GC pause time can be reduced by clearing dirty cards for allocation spaces during partial and full GCs. Minimizing pause time is critical for end-user apps because reducing pause time can reduce jank and improve smoothness.

GCW for Android also helped us find GC optimization opportunities such as parallel marking and parallel sweeping. Intel has contributed several of these optimizations to AOSP and the rest have been added to the Intel ART binaries. Altogether, these optimizations have resulted in ~20% improvement on GCW for Android. All have helped improve Intel product performance and user experience.

Open Source GCW for Android

We have submitted GCW for Android for upstreaming into AOSP to make it available to the entire Android development community. Our goal has been to make GCW for Android the most realistic Android Java memory system workload and use it to drive GC optimization on all platforms to improve Android performance and user experience.

Downloads

The code can be downloaded from Google at

https://android-review.googlesource.com/#/c/167279/

Conclusion

GCW for Android is a JVM memory management workload that is designed to emulate how real applications use Java memory management. It is intended to help both JVM and application developers optimize memory management and user experience on Android. Intel is using it to improve product performance and user experience by identifying optimization opportunities in ART. We hope it becomes an important indicator of performance and user experience on Android.

References

Java Virtual Machine (JVM): https://en.wikipedia.org/wiki/Java_virtual_machine
Garbage Collection (GC): https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)
Android Runtime (ART): https://source.android.com/devices/tech/dalvik/
Inside the Java Virtual Machine: https://www.artima.com/insidejvm/ed2/index.html

Acknowledgements (alphabetical)

Jean Christophe Beyler, Dong Yuan Chen, Haitao Feng, Jean-Philippe Halimi, Paul Hohensee, Aleksey Ignatenko, Rahul Kandu, Lei Li, Desikan Saravanan, Kumar Shiv, and Sushma Kyasaralli Thimmappa.

About the Authors

Li Wang is a software engineer in the Intel Software and Solutions Group (SSG), Systems Technologies & Optimizations (STO), Client Software Optimization (CSO). She focuses on Android workload development and memory management optimization in the Android Java runtime.

Lin Zang is a software engineer in the Intel Software and Solutions Group (SSG), Systems Technologies & Optimizations (STO), Client Software Optimization (CSO). He focuses on memory management optimization and functional stability in the Android Java runtime.


Improve the Security of Android* Applications using Hooking Techniques: Part 1



In the Android* development world, developers usually take advantage of third-party libraries (such as game engines, database engines, or mobile payment engines) to build their applications. Often these are closed-source libraries, so developers cannot change them, and sometimes they introduce security issues. For example, an internal log print left in for debugging may leak user credentials during login and payment, or the resources and scripts that a game engine stores locally in clear text can easily be obtained by an attacker.

In this article, I will share a few studies conducted using the hooking technique to provide a simple and effective protection solution against certain offline attacks on Android applications.

Common Security Risks in Android

Android Application and Package Overview

Android applications are commonly written in the Java* programming language. When developers need higher performance or low-level API access, they can code in C/C++, compile it into a native library, and call it through the Java Native Interface (JNI). The Android SDK tools then pack all compiled code, data, and resource files into an Android application package (APK).

Android apps are packaged and distributed in APK format, which is a standard ZIP file format. It can be extracted using any ZIP tools. Once extracted, an APK file may contain the following folders and files (see Figure 1):

  1. META-INF directory
    • MANIFEST.MF — manifest file
    • CERT.RSA — certificate of the application
    • CERT.SF — list of resources and SHA-1 digest of the corresponding lines in the MANIFEST.MF file
  2. classes.dex — Java classes compiled in the DEX file format understandable by the Dalvik virtual machine
  3. lib — directory containing compiled code specific to each processor architecture, with these subdirectories
    • armeabi — compiled code for all ARM*-based processors
    • armeabi-v7a — compiled code for all ARMv7 and above-based processors
    • x86 — compiled code for Intel® x86 processors
    • mips — compiled code for MIPS processors
  4. assets — directory containing application assets, which can be retrieved by AssetManager
  5. AndroidManifest.xml — an additional Android manifest file, describing the name, version, access rights, referenced library files for the application
  6. res — directory where all application resources are placed
  7. resources.arsc — file containing precompiled resources

Figure 1: The content of an Android* APK package

Once the package is installed on the user’s device, its files are extracted and placed in the following directories:

  1. The entire app package file is copied to /data/app
  2. The classes.dex is extracted and optimized, and then the optimized file is copied to the /data/dalvik-cache
  3. The native libraries are extracted and copied to /data/app-lib/<package-name>
  4. A folder named /data/data/<package-name> is created and assigned for the application to store its private data

Risk Awareness in Android Development

Analysis of the folder and file structure described in the previous section reveals several vulnerable points that developers should be aware of. An attacker can obtain a lot of valuable information by exploiting these weaknesses.

One vulnerable point is that the application stores raw data in the ‘assets’ folder, for example, the resources used by a game engine. These include the audio and video materials, the game logic script files, and the texture resources for sprites and scenes. Because the Android app package is not encrypted, an attacker can easily obtain these resources by downloading the package from the app store or copying it from another Android device.

Another vulnerable point is weak file access control on rooted devices and external storage. An attacker can read the application’s private data files using root privileges on the victim’s device, or access application data written to external storage such as an SD card. If the private data is not well protected, attackers can extract information such as user account names and passwords from the files.

Finally, debug information might be visible. If developers forget to remove the relevant debugging code before publishing an application, attackers can retrieve its debug output using Logcat.

Hooking Technique Overview

What is Hooking?

Hooking is a term for a range of code modification techniques that change the running sequence of a program by inserting instructions into its code segment at runtime (Figure 2 sketches the basic flow).

Figure 2: Hooking can change the running sequence of the program

In this article, two types of hooking techniques are investigated:

  1. Symbol table redirection

By analyzing the symbol table of the dynamic-link library, we can find every relocation address of the external function Func1(). We then patch each relocation address to point to the start address of the hooking function Hook_Func1() (see Figure 3).

Figure 3: The flow of symbol table redirection

  2. Inline redirection

Unlike symbol table redirection, which must modify every relocation address, inline hooking only overwrites the first bytes of the target function (see Figure 4). Inline redirection is more robust than symbol table hooking because a single change covers every call site. The downside is that any call to the original function, from anywhere in the application, now also executes the hooked code, so the redirected function must identify its caller carefully.

Figure 4: The flow of inline redirection

Implementing Hooking

Since the Android OS is based on the Linux* kernel, many of the studies of Linux apply to Android as well. The examples detailed here are based on Ubuntu* 12.04.5 LTS.

Inline Redirection

The simplest way to create an inline redirection is to insert a JMP instruction at the start address of the function. When the code calls the target function, it will jump to the redirect function immediately. See the example shown in Figure 5.

In the main process, the code runs func1() to process some data, then returns to the main process. The start address of func1() is 0xf7e6c7e0.

Figure 5: Inline hooking uses the first five bytes of the function to insert a JMP instruction

The inline hooking injection process replaces the first five bytes at that address with a JMP instruction. The opcode 0xE9 takes a 32-bit operand that is relative to the instruction following the patch, so the offset is 0xF7E6D7E0 - (0xF7E6C7E0 + 5) = 0x00000FFB, and the patch bytes are E9 FB 0F 00 00. Executing the patch jumps to 0xF7E6D7E0, the entrance of the function my_func1(), so all calls to func1() are redirected to my_func1(). The input to my_func1() goes through a pre-processing stage, and the processed data is then passed to func1() to complete the original work. Figure 6 shows the code running sequence after hooking func1(). Figure 7 gives the pseudo C code of func1() after hooking.

Figure 6: Usage of hooking: insert my_func1() in func1()

Using this method, the original code is unaware that the data processing flow has changed, while additional processing code has effectively been appended to the original function func1(). Developers can use this technique to patch a function at runtime.

Figure 7: Usage of hooking: the pseudo C code of Figure 6

Symbol Table Redirection

Compared to inline redirection, symbol table redirection is more complicated. The hooking code has to parse the entire symbol table, handle all possible cases, and search for and replace the relocated function addresses one by one. The symbol table in a dynamic library varies considerably depending on the compiler parameters used and on how developers call external functions.

To study all the cases regarding the symbol table, a test project was created that includes two dynamic libraries compiled with different compiler parameters:

  1. The Position Independent Code (PIC) object — libtest_PIC.so
  2. The non-PIC object — libtest_nonPIC.so

Figures 8-11 show the execution flow of the test program, the source code of libtest1() and libtest2() (identical functions compiled with different compiler parameters), and the output of the program.

Figure 8: Software working flow of the test project

The function printf() is used as the hooking target. It is the most commonly used function for printing information to the console; it is declared in stdio.h, and its code is located in glibc.

In the libtest_PIC and libtest_nonPIC libraries, three external function-calling conventions are used:

  1. Direct function call
  2. Indirect function call
    • Local function pointer
    • Global function pointer

Figure 9: The code of libtest1()

Figure 10: The code of libtest2(), the same as libtest1()

Figure 11: The output of the test program

Study of the Non-PIC Code in libtest_nonPIC.so

A standard shared library file is composed of multiple sections, each with its own role and definition. The .rel.dyn section contains the dynamic relocation table, and the section information of the file can be dumped with the command objdump -D libtest_nonPIC.so.

In the relocation section .rel.dyn of libtest_nonPIC.so (see Figure 12), four entries contain relocation information for the function printf(). Each entry in the dynamic relocation section includes the following fields:

  1. The Offset value identifies the location within the object to be adjusted.
  2. The Type field identifies the relocation type. R_386_32 places the absolute 32-bit address of the symbol into the specified memory location, while R_386_PC32 places the PC-relative 32-bit address of the symbol there.
  3. The Sym field gives the index of the referenced symbol.

Figure 13 shows the generated assembly code of the function libtest1(). The entry addresses of printf(), marked in red, correspond to the entries in the relocation section .rel.dyn shown in Figure 12.

Figure 12: Relocation section information of libtest_nonPIC.so

Figure 13: Disassembled code of libtest1(), compiled in non-PIC format

To redirect the printf() to another function called hooked_printf(), the hooking function should write the address of the hooked_printf() to these four offset addresses.

Figure 14: Working flow of 'printf("libtest1: 1st call to the original printf()\n");'

Figure 15: Working flow of 'global_printf1("libtest1: global_printf1()\n");'

Figure 16: Working flow of 'local_printf("libtest1: local_printf()\n");'

As shown in Figures 14-16, when the linker loads the dynamic library into memory, it first finds the name of the relocated symbol printf, then writes the real address of printf to the corresponding addresses (offsets 0x4b5, 0x4c2, 0x4cf, and 0x200c) defined in the relocation section .rel.dyn. After that, the code in libtest1() can jump to printf() properly.


Go To Part 2 ››

Improve the Security of Android* Applications using Hooking Techniques: Part 2



Study of the PIC Code in libtest_PIC.so

If the object is compiled in PIC mode, relocation is implemented differently. In the section information of libtest_PIC.so shown in Figure 17, the printf() relocation information is located in two relocation sections: .rel.dyn and .rel.plt. Two new relocation types, R_386_GLOB_DAT and R_386_JMP_SLOT, are used, and the absolute 32-bit address of the substituted function must be written at these offset addresses.

Figure 17: Relocation section of libtest_PIC.so

Figure 18 shows the assembly code of the function libtest2(), which is compiled in PIC mode. The entry addresses of printf(), marked in red, correspond to the relocation sections .rel.dyn and .rel.plt shown in Figure 17.

Figure 18: Disassembled code of libtest2(), compiled with the -PIC parameter

Figure 19: Working flow of 'printf("libtest2: 1st call to the original printf()\n");'

Figure 20: Working flow of 'global_printf2("libtest2: global_printf2()\n");'

Figure 21: Working flow of 'local_printf("libtest2: local_printf()\n");'

Figures 19-21 show that, for a dynamic library generated with the -PIC parameter, the code in libtest2() jumps to the addresses stored at offsets 0x1fe0, 0x2010, and 0x2000, which hold the entry address of printf().

Hook Solution

If the hook module wants to intercept calls to printf() and redirect them to another function, it should write the redirected function’s address to the offset addresses of the symbol ‘printf’ defined in the relocation sections, after the linker has loaded the dynamic library into memory.

To replace the call of the printf() function with a call to the redirected hooked_printf() function, as shown in the software flow diagram in Figure 22, a hook function should be implemented between the dlopen() and libtest() calls. The hook function first gets the offset address of the symbol printf, which is 0x1fe0, from the relocation section .rel.dyn, then writes the absolute address of the hooked_printf() function to that offset. After that, when the code in libtest2() calls printf(), it enters hooked_printf() instead.

Figure 22: Example of how the hook function intercepts the call to printf() and reroutes the call to hooked_printf(). The original function calling process is described in Figure 21.

To cover all the cases listed previously, the entire flow chart of the hook function is shown in Figure 23, and the corresponding change to the main() function is depicted in Figure 24.

Figure 23: The flow chart of the ELF hook module

Figure 24: Code in main() after hooking

The output of the program is shown in Figure 25. When libtest1() and libtest2() execute for the first time, printf() is called inside the functions. When the two functions are called again after the hook functions have run, the calls to printf() are redirected to hooked_printf(), which appends the string “is HOOKED” to the end of the normal output. Figure 26 shows the program running flow after hooking; compared with the original flow in Figure 8, hooked_printf() has been injected into libtest1() and libtest2().

Figure 25: Output of the test program; printf() has been hooked

Figure 26: The running flow of the test project after hooking

Case Study – a Hook-Based Protection Scheme in Android

Based on the studies of the hooking technique in the previous sections, we developed a plug-in that helps Android application developers improve the security of their applications. Developers only need to add one Android native library to their project and one line of Java code to load it at start-up. The library then injects protection code into the other third-party libraries in the application. The protection code encrypts local file input/output streams and bypasses the function __android_log_print(), preventing user privacy from leaking through debugging output in Logcat.

To verify the effectiveness of the protection plug-in, we wrote an Android application that simulates an application containing a third-party library. In the test application, the third-party library does two things:

  1. When an external Java instruction calls the functions in the library, it will print some information by calling __android_log_print().
  2. In the library, the code creates a file (/sdcard/data.dat) to save data in local storage without encryption, then reads it back and prints it on the screen. This action is to simulate the application trying to save some sensitive information in the local file system.

Figures 27-30 compare screenshots of the test program, the Logcat output, and the content of the saved file in the device’s local file system before and after hooking.

Figure 27: The Android* platform is Teclast X89HD, Android 4.2.2

Figure 28: App output - no change after hooking

Figure 29: Logcat output - empty after hooking

Figure 30: Local file ‘data.dat’ at /sdcard has been encrypted after hooking

As the figures show, the running flow of the program after hooking is the same as without hooking. However, Logcat cannot capture any output from the native library after hooking, and the content of the local file is no longer stored in plain text.

The plug-in helps the test application resist malicious attempts to collect information via Logcat, as well as offline attacks on the local file system.

Conclusion

The hooking technique can be used in many development scenarios, providing seamless security protection for Android applications. Hook-based protection schemes can be used not only on Android, but also on other operating systems such as Windows*, embedded Linux, or operating systems designed for Internet of Things (IoT) devices. They can significantly reduce development cycles as well as maintenance costs. Developers can develop their own hook-based security scheme or use one of the professional third-party security solutions available on the market.

References

Redirecting functions in shared ELF libraries
Apriorit Inc, Anthony Shoumikhin, 25 Jul 2013
http://www.codeproject.com/Articles/70302/Redirecting-functions-in-shared-ELF-libraries

x86 API Hooking Demystified
Jurriaan Bremer
http://jbremer.org/x86-api-hooking-demystified/

Android developer guide
http://developer.android.com/index.html

Android Open Source Project
https://source.android.com/

About the Author

Jianjun Gu is a senior application engineer in the Intel Software and Solutions Group (SSG), Developer Relations Division, Mobile Enterprise Enabling team. He focuses on the security and manageability of enterprise applications.

Intel® VTune™ Amplifier XE 2016 Update 3 Fixes List


NOTE: Defects and feature requests described below represent specific issues with specific test cases. It is difficult to succinctly describe an issue and how it impacted the specific test case. Some of the issues listed may impact multiple architectures, operating systems, and/or languages. If you have any questions about the issues discussed in this report, please post on the user forums or submit an issue to Intel® Premier Support.


DPD200254200: Better visualization for bandwidth data
DPD200363058: BSOD on Windows* 7 using VTune Amplifier 2015 Update 1
DPD200374547: Crash in VTune Amplifier version 2015 update 3
DPD200381055: VTune Amplifier crashing machine if collection ends before program
DPD200381096: VTune Amplifier causes BSOD while running
DPD200408498: VTune Amplifier assert failure when attempting to view analysis
DPD200408522: "Collection failed. The data cannot be displayed" message if special characters in path to application
DPD200409392: VTune Amplifier crashes after user-mode collection
DPD200575463: VTune Amplifier crash report in libamplxe_dbinterface_sqlite_1.99.so
DPD200577525: VTune Amplifier does not start due to licensing issue - PerfAnl: Cannot connect to license server system.

 

Android: The Road to JIT/AOT Hybrid Compilation-Based Application User Experience



Introduction

As the Android Open Source Project (AOSP) evolves, we often want to fully understand how design choices will affect User Experience (UX). This article looks at compilation paradigm shifts: how an Android application gets transformed into binary executable code in the next Android release. The primary audience is any Android developer who wants to understand how the evolution of AOSP will impact UX. Product manufacturers also must understand how the Android ecosystem is evolving to ensure that their users get the most performance at first boot, first launch for often-used applications and during system updates.

The Lollipop (5.0) Android release introduced a new virtual machine (VM) called the Android Runtime (ART). ART replaced KitKat’s (4.4) Dalvik VM and was deemed to provide a faster, smoother, and more powerful computing experience. With the Marshmallow (6.0) Android release, ART followed up with specific VM tuning to improve UX. These consecutive transformations in the way applications are executed on Android were driven primarily by higher performance expectations.

Lollipop included a new method-based compilation technology at application install time. Next, Marshmallow introduced memory management and Garbage Collection (GC) improvements with the intent of enhancing performance and battery life. In both Lollipop and Marshmallow, ahead-of-time (AOT) compilation forces all methods in an Android application package file (.apk) to be compiled during application installation.

There are several shortcomings to this compilation approach, including noticeably longer application install and compile times. These are critical metrics because end users notice the impact on every application install or over-the-air (OTA) software update, and OEMs rely on fast first boot time during product validation.

In both Lollipop and Marshmallow, AOT installation forces all methods inside an application to be compiled at the same optimization level, with the exception of large methods, which might be left interpreted depending on device storage limitations. AOT-compiled application binaries also consume significant storage space; as a result, low-storage devices sometimes leave applications interpreted. In reality, each user is different, and his or her interaction with an Android application is potentially unique: typical users interact with some applications more than others, and some features of a particular application are used more often than others.

The rest of this article shows how the AOSP master branch fixes major Marshmallow shortcomings and takes a first step towards obtaining the best performance possible without the shortcomings of long installation, update and boot times and with the benefits of reduced storage space and memory footprint. The next sections will explain how the AOSP master also generates better native code for user applications, and its impact on user perception of application install and launch times, RAM usage, binary size and overall Java performance.

Application UX: Dynamic compilation, background compilation using profiles

This section presents the new application compilation workflow. Images are often worth a thousand words, so this section is built around a discussion of the following figure:

Based on recent developments in the AOSP master branch (as shown in the N developer preview), the upcoming Android version is expected to no longer force applications to be compiled at install time. The default is to do no compilation at all, so applications install much faster. To maintain the same level of performance, application compilation becomes a hybrid mechanism involving Just-In-Time (JIT) compilation while the application is running, plus background compilation that occurs when the device is plugged in and stays idle for a long time. These are not yet the default settings in the AOSP master, where JIT and background compilation must be enabled explicitly via Android system settings; in the N developer preview, however, JIT is enabled by default.

At first launch of an application, none of its methods has been compiled. Without AOT-compiled code, an interpreter initially runs all the bytecode. This approach improves application installation time but hurts application startup and runtime performance. For each method, the interpreter counts the number of times the method is entered, the number of loop iterations executed within it, and the number of virtual calls it makes. A method is considered often-executed, or warm, when the total of these counts exceeds a threshold. Once warm, more information is gathered about control flow and the actual targets of virtual calls. Once the total exceeds a second, higher threshold, the warm method becomes hot and is compiled by the JIT compiler into native executable code, which is stored in the JIT code cache along with the collected profile information. The next time the hot method is invoked, the native code is executed instead of the interpreter.

One major development in the AOSP master based JIT compiler is that ART records which methods are hot and saves their names for later AOT recompilation. When the device is unused (and charging) for a long duration, a service compiles the hot methods and saves the generated code. When the user launches the application again, the compiled code is loaded directly into memory and executed; there is no need to interpret or JIT-compile these hot methods. For this reason, the compiled code can be seen as divided into two parts: AOT-loaded methods and JIT'ed methods. After a few days, if a user has exercised all the major features of a given application, the often-used code corresponding to those features will all be compiled and performance should be optimal.

To remember which methods are hot, Android now contains a mechanism known as the profile saver. It polls the code cache periodically to obtain the list of hot methods and writes them into a file for later use by the background compiler (based on the N developer preview).

A major advantage of the hybrid approach is that the dynamic (JIT) compiler and the background (AOT) compiler need not be identical or perform the same set of optimizations. The JIT compiler can be quick in terms of compile time, whereas the background compiler can have a more extensive optimizer. By differentiating the two compilers, the AOSP master branch opens the door to classic heavyweight optimizations that have existed in mainstream compilers for decades. On the flip side, if the device never has any down-time, the background compiler cannot do its job; in typical modern phone and tablet use, however, this is unlikely.

First Boot, App Install and Launch Time, RAM and Storage, App Performance

Minimizing the time between when a user performs an action to when the system responds is critical to the best user experience. First boot of the device, waiting for an application to install or launch, and application runtime performance are some of the most critical user perception metrics. Typical users are also concerned with system update time, application memory usage and storage space limitations.

Due to AOT optimizing compilation, device first boot in both Android Lollipop and Marshmallow versions takes significantly longer compared to previous versions of Android. In the upcoming version of Android, first boot should be faster since the system relies on JIT compilation to provide good performance. Application and system update times should also improve. As a result of compiling only methods associated with often-used application features, users can expect greatly reduced application binary size, which saves storage space.

In Lollipop and Marshmallow, application installation takes a noticeably long time due to AOT compilation of the entire application. The larger the application, the worse the problem. In AOSP master with the JIT enabled, the system relies on compiling methods at runtime. This significantly reduces compilation time and RAM footprint, which is important for low memory devices. Application startup, however, is a bit slower, but the AOSP master branch contains a fast interpreter, which helps alleviate the problem.

There is a downside to the shift to the hybrid JIT/AOT compilation model (shown in the N preview). Because an interpreter runs first and JIT compilation takes time to finish, some applications may feel sluggish compared to Marshmallow until the code is compiled. However, the application is expected to recover performance as the interpreter calls the JIT optimizing compiler on commonly used methods. Finally, as the previous section stated, hot code is interpreted only until it is JIT- or AOT-compiled. After background compilation, the interpreter is no longer used at all, since the previously hot code will have been compiled for the next launch.

Conclusion

AOSP master brings dynamic compilation back to the next generation of Android by re-introducing a Just-In-Time (JIT) compiler. This is a necessary evolution from Marshmallow to address excessive application install and first boot times, as well as memory and disk space consumption. The AOSP master based JIT is not the same as the one used in Android previously, which was phased out in the Lollipop release: it has a larger optimization scope (method-based JIT vs. trace-based JIT) and a much more complex infrastructure. Currently, the AOSP master JIT-enabled compilation infrastructure can retain hot method profiles and use them to recompile hot methods in the background when the phone is idle for a long time and charging (this should be quite close to the behavior in the N preview). Generated code performance is thus improved, based on a particular user's application use, the next time the application is launched.

This switch to an interpreter plus hybrid JIT/AOT compilation system in AOSP master should lead to a much better user experience with far shorter first boot, install and over-the-air update times with the additional benefits of reduced RAM and storage usage. The interpreter and JIT compiler combination provides a good application launch experience while background compilation should deliver excellent performance after a few days of use. The two elements together should bring the performance of the AOSP master branch to the same or better level as Marshmallow.

Acknowledgements (alphabetical)

Dong-Yuan Chen, Chris Elford, Chao-Ying Fu, Aleksey Ignatenko, Serguei Katkov, Razvan Lupusoru, Mark Mendell, Dmitry Petrochenko, Desikan Saravanan, Nikolay Serdjuk

About the Authors

Rahul Kandu is a software engineer in the Intel Software and Solutions Group (SSG), Systems Technologies & Optimizations (STO), Client Software Optimization (CSO). He focuses on Android performance and finds optimization opportunities to help Intel's performance in the Android ecosystem.

Jean Christophe Beyler is a software engineer in the Intel Software and Solutions Group (SSG), Systems Technologies & Optimizations (STO), Client Software Optimization (CSO). He focuses on the Android compiler and ecosystem but also delves into other performance-related and compiler technologies.

Paul Hohensee is a principal engineer in the Intel Software and Solutions Group (SSG), Systems Technologies & Optimizations (STO), Client Software Optimization (CSO). He focuses on the runtime and library aspects for Java virtual machines, and is helping change the way we measure Java performance to be application and UX oriented.

Optimize Your Unity* Games with Intel® Graphics Performance Analyzers


Today, developers rarely create their own game engines; instead, they opt to use world-class engines like Unity* to quickly develop quality titles that are easily deployable to a variety of platforms. Although the development paradigm has changed, games still need to be highly performant and give players the best experience on the widest range of hardware.

This is where the Intel® Graphics Performance Analyzers (Intel® GPA) can help! Below are some resources to help you optimize your Unity title using tools like Intel GPA.

Unity Optimization Guide for Intel x86 Platforms

This guide opens with a thorough overview of Intel GPA in the context of Unity. It covers not only the tools but also general optimization approaches for a Unity application, including occlusion culling, batching, render ordering, texture compression, and much more.

I highly recommend reading this guide before optimizing your next Unity title for any x86-based platform, which includes both Android and Windows games. You can find the four-part optimization guide here.

Unity Optimization Video Tutorial Series

Cristiano Ferreira from the game development enabling group has created several great videos about Unity optimization. These short videos offer tips on things to look out for when developing your Unity title.

Disable Fully Transparent Geometry

Render Queue Ordering

Unity Performance Sandbox Sample

Not only do we have some great learning resources available, we also have a sample Unity scene that helps visualize some of these optimizations you can make.  You can download the full source code from our GitHub.

If you have any questions, comments, requests, please post them on our Intel® GPA forums.  If you are interested in learning more about performance optimization or Intel GPA, you can find more information on the Intel® GPA website.

Q&A with Android Developer Paul Blundell


Gregory Menvielle
President at SmartNotify

In this post, I interviewed developer Paul Blundell to gain some insight on his experience with Android and discuss how he manages his apps.

Gregory: How did you get started with Android? Why did you go with this platform rather than iOS, or even hybrid?

Paul: The concept of hybrid did not exist when I started, back in roughly 2009. I remember at the time there was a big barrier to doing iOS apps: I needed a Mac and didn't have one.

I had heard rumors of Android in 2008, when I was at my first job doing Nokia J2ME apps, and the allure of a mobile operating system with a permissions system that allowed apps more freedom while also being more secure was a real treat.

I experimented a bit in my own time but it didn't go anywhere. Then in 2009 when I was working on the UK National Rail website they also wanted an Android app and so I was selected to do it. Sometimes being in the right place at the right time helps.

Gregory: Can we talk about your work with Udacity, given the numerous lines of code you review all the time, what would you say is the most common mistake developers make as they learn the new environment?

Paul: That's a really good question. One of the most common mistakes I see when reviewing code is misuse of the static keyword. This usually happens in two places:

  1. First, an instance field is made static when it should not be. This means the reference to that field lives longer than the class it is in, which can lead to memory leaks; these are never good, but are even more important to avoid in a memory-constrained mobile environment. An example would be an activity holding a static reference to one of your Views.
  2. The other, less obvious 'mistake' is not making classes static when they should be. Inner classes, for example: you should always consider making them static so that they don't hold a reference to the outer class, which also leads to memory leaks. Most people starting out don't realize this, and it's an easy mistake to make, but if you default to making your inner classes static it will help you and your apps in the long term. An example here would be a class extending AsyncTask inside an Activity.
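Paul's second point can be demonstrated in plain Java: the compiler gives a non-static inner class a hidden synthetic field (this$0) referencing the outer instance, and that is exactly the reference that keeps an Activity alive. A minimal sketch:

```java
// Demonstrates the hidden outer-class reference described above. A non-static
// inner class gets a synthetic this$0 field pointing at the enclosing
// instance; a static nested class does not.
public class LeakDemo {
    // Non-static inner class: implicitly references the enclosing LeakDemo
    // (think: an AsyncTask subclass declared inside an Activity).
    class InnerTask {}

    // Static nested class: no hidden reference, so it cannot leak the outer
    // instance.
    static class SafeTask {}

    public static void main(String[] args) {
        // The hidden reference shows up as a synthetic field named this$0.
        System.out.println(InnerTask.class.getDeclaredFields().length); // 1
        System.out.println(SafeTask.class.getDeclaredFields().length);  // 0
    }
}
```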

Gregory: Where do you see Android fitting in the IoT world?

Paul: Android itself fits into the app world, and to configure IoT devices and check up on them it's always nice to use an app. So this is the sweet spot for Android.

Secondly, Google is bringing out Brillo and Weave, a platform and a protocol for building IoT devices with a smaller coding footprint than Android for embedded devices. The interesting thing is that Google has SDKs for Brillo devices to talk to Android devices via Weave, which makes the connecting and configuration I first talked about really easy.

Gregory: Some of your apps have been downloaded millions of times. At which point did you feel that you had to do more customer support vs. app development (i.e., responding to requests, feedback...)?

Paul: I never feel like this :-) Being organized is really important in the software industry. Keeping a list of tasks you wish to do and time-boxing areas of work helps you keep on top of everything. This includes making time for customer support, but also balancing that with the amount of development work you wish to do. If I had one piece of advice for people around this, it would be this:

Have a master to-do list that encompasses life goals and major events. Each morning decide your work for the day by reading these and figuring out what is the smallest next step you need to do to accomplish this, and write this down for a daily to-do list.

For example:

  • [ ] Life goal: "Write my zombie apocalypse game"
  • [ ] Today's task: "Write main menu with new game option and quit option"

Combine this with using a calendar/diary to schedule around other people's time constraints, and you should always find time to do everything you want.

Gregory: While the issue has moved from legal to tech at the moment, what was your take on the DOJ vs Apple case in the US?

Paul: I sided with Apple. Whilst, yes, the FBI needs the information, Apple cannot create software backdoors, because a backdoor that lets the FBI in also easily lets in anyone else with more evil intentions.

I don't know why the FBI took it to court, because, as we have now found out, someone managed to hack into it anyway (which I always thought was possible). Whilst that is still bad, it's less bad than setting a precedent of publicly accessible, non-secure software.

Gregory: What would be your recommendations for people who need to produce an app that will be working in remote locations with little connectivity?

Paul: Consider offline mode from the start. If you have a server-side component, use HTTP caching, ETags, and gzip content. This allows the phone to still present content when offline (if it has been seen before).

You can also have more aggressive prefetching algorithms: when the app gets a network connection, it predicts the pages the user is going to visit and pulls down that data when it can for later consumption. The flip side is that people in environments with little connectivity are usually also on small data plans, and therefore don't want to download too much.

You can use something like Google's JobScheduler, which allows you to control when your tasks run, e.g., when the user is on a WiFi network (to save data) or plugged into a charger (to save battery).

Gregory: How do you find the time to publish so much on GitHub !?

Paul: Going back to my last answer, it's all about organization. I like to tell myself I will spend two hours on Sunday just working on some nice open source idea I've had; those two-hour slots quickly add up. I also make the most of travelling: if I am taking a train or flying somewhere, I use the time like a hackathon and set myself a task I want to complete. The trick is to always have my laptop and a charger in my backpack, and the great thing about Git is that I don't need an internet connection to save my work.

Gregory: Can you tell us about the one app you really love on your phone?

Paul: I really like BUX. It's a share-dealing application, but the good thing is that you can play for fun. It really gamifies share-dealing, and you can add your friends and create tournaments. The UX of the app is quite interesting: I found it intuitive from the start, even though it doesn't follow many of the Android design guidelines (although these guidelines change all the time), and I'd argue the BUX app understands them and has great reasons for breaking them.

Gregory: And the one app you removed the most recently? Why?

Paul: Ooh, I don't really like shaming any apps. However, since you asked, I just uninstalled Famous, an app where you become someone's biggest fan on Twitter. I was in the beta, so I'm not sure if it's been generally released yet, but the fun was that you could earn 'hearts' by being someone's biggest fan over time, and other people could outbid you and steal them from you.

Whilst I found it fun for a bit, after a while it didn't keep my interest, as it never went to any next level. The UX of this app was also interesting in that you can swipe in any direction on the screen to navigate to another activity. This was a steep learning curve, and I'm not convinced it's the best way to do it. Uninstalled now! Sorry!

To learn more about Paul, check out his posts on the Android Hub and view his personal site here: http://blog.blundellapps.co.uk/

The Java* Application Component Workload for Android*: Real Java* Application Use Cases for Android*


Download PDF [PDF 902 KB]

Introduction

The Android system has roughly a billion users, with almost a million applications from the Play Store downloaded and used daily. Android users equate their User Experience (UX) with their application (app) experience. Measuring UX via running Play Store apps is subjective, non-repeatable and difficult to analyze. Further, most magazine benchmarks don’t stress critical paths representing real application behavior; optimizing for these benchmarks neither improves UX nor delights app users. Many Android benchmarks that have been used since Cupcake (Android 1.0) have become obsolete due to improvements in Android compilers and runtimes.

This paper discusses the Intel-developed Java* Application Component Workload (ACW) for Android*. ACW was developed to bridge the gap between magazine benchmarks and more sophisticated workloads that model real app behavior, as well as to provide a guide to robust optimizations that perceivably improve UX. The workload has been analyzed to help app developers write optimal Java code for Android (for more information, see https://software.intel.com/en-us/articles/how-to-optimize-java-code-in-android-marshmallow). OEMs, customers and system engineers can use ACW to compare Android software and System-on-a-Chip (SoC) capabilities.

Android Component Workload (ACW) Overview

The workload consists of a set of computational kernels from often-used applications. Kernels are groups of tests in the areas of gaming, artificial intelligence, security, parsing HTML, PDF document parsing and encryption, image processing and compression/decompression. ACW stresses the Android Java Runtime (ART) compiler and runtime, measuring the impact of compiled code and its runtime overhead while executing the application program. VM engineers and performance analysts can use the workload to explore ways to improve ART code generation, object allocation and runtime optimization, as well as suggest ways to improve the micro-architecture of upcoming SoCs.

ACW includes a set of tests designed to measure the difference between 32- and 64-bit code, but the primary focus is on UX. ACW can be run from both a Graphical User Interface (GUI) and the command line (using adb shell). The workload can mix and match which kernels are run and measured. The use cases are of fixed duration and they report a throughput-based score. The final score (operations/second) is the geometric mean of the throughput scores of each kernel.

How to run ACW in the Android mobile environment (user mode)

On the Android platform, ACW is provided as an Android application package (apk). After installation, clicking on the JACWfA icon (see icon in Figure 1) launches the workload and displays a UI with the Start option at the center of the screen. Additional navigation is available at the bottom of the UI using two tabs: TEST and RESULTS. By default, the TEST tab will be displayed. Click the Start button to run the workload using the default profile settings. By default, all tests (tests are the kernels described above) will be selected, but the user may deselect any particular test or tests. In the top right corner, there is a Settings icon which allows users to configure Threads, Suite and Accuracy for each run.

Figure 1. ACW UI (from left to right: Main / Configure activities)

Figure 2. ACW UI (from left to right: Progress / Results activities)

The workload can also be run as an Android application from the command line via adb. For example:

adb shell am start -S -n com.intel.mttest.android/com.intel.mttest.android.StarterActivity -e autostart true -e -s masterset:all

The Android version of the workload is designed to mimic the characteristics exhibited by real applications. Most Java applications in Android are multi-threaded, so the workload has support for as many threads as there are available cores on the device, though there is no direct communication between these threads.

ACW has three pre-configured test set options: Java, 64scale and All. Java mode runs only tests which closely mimic real application behavior. 64scale mode runs computational tests for the purpose of comparing 32- and 64-bit execution. All mode runs every test in the workload. Additionally, one can select how long tests should run via the Accuracy modes; the longer a test runs, the more accurate the result. The four predefined Accuracy modes are: very-precisely (longest run), precisely, default and fast (shortest run). The default Accuracy balances test result stability against time to run, and should take 20-40 minutes to complete with the default test set configuration. To modify accuracy settings, update the corresponding XML configuration files. When configuration is complete, the user can return to the home UI by clicking the icon in the top left corner or by using the back option at the bottom of the screen.

Click the START button to run the workload (Figure 3). The progress wheel (as on Figure 2) shows the run status.

Figure 3. ACW progress wheel

The UI displays which tests have completed, which one is currently in progress and which are yet to run. The final score is displayed at the top of the RESULTS tab. Additionally, individual test and subtest scores are displayed on the screen and are available from logcat messages. Note that ACW can be run in a Developer mode, which allows even easier customization and debuggability. More information on Developer mode can be found on the web at How to run ACW in Developer mode.

ACW Tests (Kernels)

ACW includes over 80 tests grouped into kernels (see the table below) associated with different application areas. Every kernel includes a number of tests that implement realistic Android application scenarios using standard Java libraries. Some tests, such as MATH and SORT, implement well-known algorithms.

Kernel                       | Library            | Description
-----------------------------|--------------------|--------------------------------
Artificial Intelligence (AI) | libGdx AI          | Artificial intelligence
Compression                  | XZ, Apache Commons | Compression
Dmath                        |                    | Decimal integer math algorithms
Fmath                        |                    | Floating point math algorithms
Html                         | jsoup              | Html parser
Image                        | BoofCV             | Image processing
Jni                          |                    | JNI stress
Lphysics                     | jBullet            | Physics engine
Pdf                          | Pdfbox             | Pdf parsing and encryption
Sort                         |                    | Sort algorithms
Xphysics                     | jBox2D             | Physics engine

ACW produces scores for individual kernels and their subtests and an overall performance score. The overall score is measured in operations per second (ops/sec). Here are the formulas to calculate scores:

Test Total = sum of executed Test iterations. *
Single Test score = Test Total / Time spent for execution.
Kernel score = GEOMEAN of the component test scores.
Overall score = GEOMEAN of all kernel scores.
* Defined by test implementation
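The formulas above can be sketched as runnable Java. The sample numbers below are illustrative only, not taken from an actual ACW run.

```java
// Runnable sketch of the scoring formulas above, with made-up sample numbers.
public class ScoreSketch {
    // Single Test score = Test Total / Time spent for execution (ops/sec).
    static double testScore(long iterations, double seconds) {
        return iterations / seconds;
    }

    // Geometric mean, computed via logarithms for numerical stability.
    static double geomean(double[] scores) {
        double logSum = 0.0;
        for (double s : scores) logSum += Math.log(s);
        return Math.exp(logSum / scores.length);
    }

    public static void main(String[] args) {
        // Two hypothetical tests in one kernel: 600 and 2400 iterations in 6 s,
        // i.e. 100 and 400 ops/sec.
        double[] tests = { testScore(600, 6.0), testScore(2400, 6.0) };
        System.out.println(geomean(tests)); // geomean(100, 400) is ~200
    }
}
```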

The Design of ACW for Android

ACW uses the open source MTTest modular framework, which was specially developed for this workload. It can be configured to run under both ART and the Java Virtual Machine (JVM) on Android as well as Linux- and Windows-based PC environments using both 32- and 64-bit operating systems.

MTTest has 4 major modules:

  • Configuration (configures workload)
  • Runner (tests run functionality)
  • Summary (collects test run results)
  • Reporter (reports results in a specific way)

The framework is designed to be easily extended for new performance testing cases. The Runner, Summary and Reporter modules can easily be updated to support new functionality, such as reporting results in XML file format.
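As a sketch of that extensibility, a hypothetical XML reporter might look like the following. The Reporter interface and XmlReporter class here are illustrative inventions, not MTTest's actual API.

```java
// Hedged sketch of extending a Reporter-style module for XML output.
// Interface and class names are illustrative, not MTTest's actual API.
import java.util.Locale;

public class XmlReporterSketch {
    interface Reporter {
        String report(String testName, double score);
    }

    static class XmlReporter implements Reporter {
        @Override
        public String report(String testName, double score) {
            // Locale.ROOT keeps the decimal separator a '.' on any system.
            return String.format(Locale.ROOT,
                    "<result test=\"%s\" score=\"%.2f\"/>", testName, score);
        }
    }

    public static void main(String[] args) {
        Reporter r = new XmlReporter();
        System.out.println(r.report("Pdf", 123.456)); // <result test="Pdf" score="123.46"/>
    }
}
```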

Figure 4. ACW architecture diagram

The standard workflow shown in the architecture diagram (Figure 4) is:

  • Read the workload configuration and test set from XML configuration files
  • Run test cases (run mode is defined by Run Model)
    • Perform test specific initialization
    • Run test (Three phases)
      • Ramp-Up
      • Measurement
      • Ramp-Down
  • Summary module collects results
  • Deliver results with a specific Reporter (Activity Reporter) for Android application

Test Ramp-Up is used to allow JVMs with Just-In-Time (JIT) compilers to compile hot code so that we can measure pure JVM performance. That is, we wait long enough to trigger JIT compiles so the main test is run using compiled code, not the interpreter. Ramp-up and ramp-down phases are also needed for multi-threaded runs in order to ensure that all test threads are running while some of them are starting to ramp-up or others are finishing measurement. The Measurement phase is used to compute results based on test duration and number of operations executed.

In all tests, iteration() executes in a loop until the time spent reaches the configured value of stageDuration, which limits test run time (iteration time):

long startTime = System.nanoTime();
// Run iterations until the configured stage duration elapses.
do {
    count++;
    score += test.iteration();
} while (System.nanoTime() < startTime + stageDuration);
elapsedTime = System.nanoTime() - startTime;
// Hand the accumulated score, elapsed time and iteration count to the Summary module.
summary.collect(score, elapsedTime, count);

Ramp-up/down times are defined by two types of configuration files. The first is the Test configuration file (<ACW_DIR>/assets/testsets/masterset.xml) which includes a list of XML files which describe the tests to be run and their parameters. In the example below, pdf.xml describes the Pdf kernel, subtest names, number of repetitions and input file names.

<?xml version="1.0"?>
<mttest version="0.1">
    <conf name="timeUnits" value="second" />
    <workload name="com.intel.JACW.pdf.Encryption">
        <option name="repeats" value="1" />
        <option name="goldenFileName" value="apache.pdf" />
    </workload>
    <workload name="com.intel.JACW.pdf.Parser">
        <option name="repeats" value="1" />
        <option name="goldenFileName" value="apache.pdf" />
    </workload>
</mttest>

With the default test set configuration, the workload runs certain tests several times and compares the results against golden files of different types and sizes (<ACW dir>/assets/goldens). This is done to emulate system load variation as on real end-user systems. As a result, the default number of test runs is greater than the number of performance tests.

The second type of XML file is a setup configuration file (<ACW dir>/assets/configs) that limits test run times. There are four predefined XML configuration files: short, medium, long and very_long. By default, medium.xml is used, which limits workload run time to six seconds, with a two-second ramp-up and a one-second ramp-down:

<?xml version="1.0"?>
<mttest>
    <name value="default" />
    <conf name="rampUp" value="2000" />
    <conf name="duration" value="6000" />
    <conf name="rampDown" value="1000" />
    <conf name="isValidating" value="false" />
</mttest>

ACW for Android Performance Overview

ACW runs multiple threads (up to the number of CPU cores) as part of its default settings. Real Android applications in Java are often multi-threaded, although interaction between multiple Java threads is a new area of focus among Android performance analysts and is not present in ACW. From a performance investigation standpoint, we have focused on the single-threaded case, since app threads typically do not interact except during synchronization within the ART framework libraries.

Performance analysis is motivated towards guiding VM engineers to identify optimizations for Android UX and understand the SoC limitations that impact ART performance. ACW’s ART use-cases that mimic real Java application behavior (HTML, Pdf, Lphysics, Ai, Image and Compression) spend most of their execution time in ART compiler generated code for the application, ART framework system library code, and the ART runtime. ART runtime overhead is commonly associated with object allocation, array bound check elimination, class hierarchy checks and synchronization (locks). These applications use typical ART framework library (libcore) routines from java.lang.String, StringBuffer, Character, java.util.ArrayList, Arrays, and Random. A small amount of time is spent in native String allocation.

While delving into SoC characterization, we have seen Instruction Translation Lookaside Buffer (ITLB) cycle miss costs of 8-14% (lost CPU time), and 3-5% Data Translation Lookaside Buffer (DTLB) cycle miss costs. Performance is often restricted by instruction cache size on devices with smaller instruction caches, leaving execution bound by the processor front end (instruction parsing and functional unit distribution).

ACW Optimization Opportunities in ART

ACW opens the door for Profile Guided Optimizations (e.g., in java.lang.String and java.util.math) and for native inlining of methods that call into ART's native String implementation.

Open Source ACW for Android

ACW has been open sourced at https://github.com/android-workloads/JACWfA/ as part of Intel’s contribution to improving the way User Experience performance is measured on Android. After optimizing away several synthetic benchmarks such as CF-Bench and Quadrant, Intel’s team took a step forward by using Icy Rocks* (see the paper for details https://software.intel.com/en-us/android/articles/icy-rocks-workload-a-real-workload-for-the-android-platform) to represent a physics gaming workload. ACW is a further step forward in that it represents a wider set of application use-cases in the form of an Android workload. Intel’s objective is to drive cross-platform ART compiler and runtime performance improvements on the latest and greatest versions of Android in order to improve user experience.

Downloads

The source code can be downloaded at https://github.com/android-workloads/JACWfA/. Release 1.1 of the apk is available in the /bin folder.

Conclusion

Java* ACW for Android is a Java workload designed to stress real application components that in turn influence UX on Android Java-based applications. It is intended to help app developers write better apps, and Android VM engineers to better optimize existing libraries and Java runtime performance. Both contribute to better Android application performance and user experience. OEMs and customers are also encouraged to use this workload to gauge the Android software stack and CPU capabilities on mobile devices. Intel is using ACW to improve product performance and user experience by identifying optimization opportunities in ART. We hope it becomes an important indicator of performance and user experience on Android.

About the authors

Aleksey Ignatenko is a Sr. Software Engineer in the Intel Software and Solutions Group (SSG), Systems Technologies & Optimizations (STO), Client Software Optimization (CSO). He focuses on Android workload development, optimizations in the Java runtimes and evolution of Android ecosystem.

Rahul Kandu is a Software Engineer in the Intel Software and Solutions Group (SSG), Systems Technologies & Optimizations (STO), Client Software Optimization (CSO). He focuses on Android performance and finds optimization opportunities to help Intel's performance in the Android ecosystem.

Acknowledgements (Alphabetical)

Dong-Yuan Chen, Chris Elford, Paul Hohensee, Serguei Katkov, Anil Kumar, Mark Mendell, Evgene Petrenko, Dmitry Petrochenko, Yevgeny Rouban, Desikan Saravanan, Dmitrii Silin, Vyacheslav Shakin, Kumar Shiv


Outfit7 Celebrates Success on Their Own Developer's Journey


Download Document

Lessons From The Road Less Traveled

The goal of every good app developer is the same: take a great idea, build a clever prototype, press on through multiple challenges, and, finally, land investors to bring the idea to market. Fame and fortune naturally should follow, plus the opportunity to build additional apps. Lather, rinse, repeat.

However, rules for mobile apps and games are changing. Consider the case of Outfit7, creator of the Talking Tom and Friends franchise, which is based on the simple talk-back concept. In late 2015, Outfit7 crossed the incredible three-billion-downloads mark, after just six years in the market. That immediate success makes their journey unlike the one most developers take.

Perhaps more significantly, Outfit7 rejected early 'angel' investors and used the founders' own money, banking on phenomenal income growth to fuel the company's rise. In addition, the team isn’t dreaming about following the usual development path to become the next Microsoft*, or the next Electronic Arts*. They want to be the next Disney*.

Their technology story recently took another interesting twist, as they switched to the Unity* engine. The primary goal was to take advantage of multi-platform support that now includes the Universal Windows Platform (UWP). The move to Unity should open up better access to the Intel® architecture, and the one billion Windows® 10 devices expected to be sold in the next three years, furthering Outfit7’s unprecedented growth.

Humble, But Far From Typical, Beginnings

Headquartered in Limassol, Cyprus, Outfit7 has key subsidiaries in the United Kingdom, Slovenia, and China. The company was started (in 2009) by eight friends–mostly engineers–who had previously worked on web technologies. The early plan behind Outfit7 was to take what the team knew about performance and algorithms, and make something entertaining and fun.

Their key visionary was Samo Login, who had served as the founding Chief Technology Officer at Najdi.si, a popular search-engine portal in Slovenia. He invested $270,000 of his own money, and pushed his team to quickly develop multiple apps. They discarded the weakest entries, and, in July 2010, launched Talking Tom Cat as an iOS* app.

The Talking Tom Cat app has grown rapidly, and now shares the stage with numerous other 'talking friends', including dogs, parrots, and more cats. In the original, players poke, prod and laugh out loud to Talking Tom Cat's kooky reactions and share fun talk-back videos with friends. The experience and gameplay continues to evolve with each app launch.

With the subsequent highly successful and award-winning releases My Talking Tom and My Talking Angela, gameplay involves growing, and caring for, your character. Players start with a cuddly little baby that requires food and attention (and bathroom breaks) in order to grow and thrive. Players can quickly start earning credits by watching advertisements and marketing videos, or by playing simple games and racking up high scores. The credits help you shop for clothing upgrades, food, home-remodeling options, and more. There are voice options as well. The app is free, with revenues based on game advertising.

Players have formed real and immediate emotional attachments to their electronic pets, and the global reaction keeps growing. Couples have even been married using the voice feature, which renders human voice inputs in the high-pitched squeaky voice iconic to Outfit7’s creation.

Talking Tom’s Jetski added a runner element to the Talking Tom and Friends world. And Talking Tom Bubble Shooter further extends the app franchise’s gameplay with immersive bubble-shooter gameplay, allowing fans to play solo, or go head-to-head, in real-time.

Today, the brand as a whole has permeated pop culture. Talking Tom and Friends visited the White House for the 2013 Easter Egg Roll, and there are now dedicated YouTube* channels, and an animated web series. A full-length animated movie could be next.

As with most startups beginning the developer's journey, Outfit7's engineers had a good idea about the hardware and software resources they would need in order to grow quickly. They chose the Google* App Engine for their cloud technology, and developed their game with multi-platform reach in mind from the beginning. But support for Intel devices was limited in the early days.

Marko Štamcar, one of the co-founders at Outfit7, is currently the Senior Software Engineer, and release manager. He explained that the multi-platform approach was a key driver in switching to the Unity engine for future development. "We always want to support as many platforms and systems as we can, and we start every project with that in mind. We must have support in the apps to connect via Facebook*, for example. People from different platforms must be able to see each other in our apps as well."

For the first couple of years, Outfit7 used their own technology in a proprietary development engine to create their apps for Apple* and Android*, the two leading platforms. They only supported the ARM architecture directly, but that was enough. Fortunately, many non-ARM devices provided ARM translation in their operating systems–but there were issues. "We started noticing that Intel devices displayed some incompatibility issues with our apps," Štamcar said. At that time, the Unity engine wasn’t an answer, because it did not support Intel either. "We had to wait until Unity supported Intel architecture for mobile apps in a stable manner, and that happened a few months ago, in early 2016. Since then, we have released My Talking Tom and My Talking Angela for Intel devices."

Intel and Windows® 10 Expand the App Marketplace

Štamcar and his team were smart to concentrate at an early stage on the two most popular platforms. According to Business of Apps, a leading tracking site, there were 25 billion iOS app downloads in 2015, and over 50 billion Android downloads. Total app revenues across all platforms are projected to grow from $45 billion in 2015, to over $76 billion in 2017.

With market saturation always an issue, future growth should get a boost from the emergence of Windows 10. The latest Microsoft operating system is officially on the fastest adoption trajectory of any version of Windows–it’s now running across 200 million PCs, tablets and phones. In the next three years, an estimated one billion newly sold devices could join those ranks, so any developer, no matter where they are in their journey, would be wise to jump on the bandwagon.

Štamcar has. "We support all the Windows-based, touchscreen-based tablets there are, and all the phones they have," he said. "We like mobile devices and touchscreen-based devices, because all our apps have lots of interaction with touches. We don't invest in developing two different user experiences, so mouse and keyboards for us are not an interesting application."

Other than that, however, Štamcar is interested in any platform that performs. "Apple and the Android are not the only players in the mobile systems anymore," he said.

When the team first started, they actually used different engines for different platforms. "There were some similarities, but there were many differences as well. The only common things in the engines were the UI behavior and the game logic."

Two engines may not seem so bad, but consider the sheer number of devices they have to support. "Some game engines are so complex that they won't perform well on all devices, especially on Android," Štamcar said. "I think My Talking Tom supports around 10,000 devices. That is a lot of devices to run perfectly on–and to get the quality we want, we have a huge quality-assurance department."

As Outfit7 evolved, the team was interested in porting over to a single development engine that would output compiled code for multiple platforms. There were three leading contenders:

  1. Unity, with about 47% of the market
  2. Unreal*, with an approximate 17% market share
  3. Any one of around 450 other game development engines…

Like so many other developers, Outfit7 chose Unity.

For Outfit7, there were several reasons why switching to Unity was a smart move. First, settling on a single engine cut down overhead when managing multiple code-bases and testing devices. Second, the team was able to tap into a healthy local market for developers who understood Unity. And third, it opened up the Intel Architecture.

"The level of complexity we have in our apps now is impressive," Štamcar says. "And the mobile audience has really grown. In 2009, you could sell almost any boring app for one dollar. You can't do that anymore; consumers now expect more complexity. But that means it's a little expensive to develop for many platforms if you don't have a unified game engine. Unity is the key ingredient for us."

Outfit7 wanted a lightweight game engine that supported 3D, but the game scenes are not very complex. That ruled out Unreal, which supports highly complex first-person shooters. "We did not, and still don't, need such a complex game engine as Unreal. We just need basic support and a good architecture to push our ideas forward." In 2013, they released My Talking Tom based on the Unity port.

Investing further in a proprietary game engine was going to be expensive, Štamcar says. And having access to Windows 10 development options could be a game-changer. "Unity is a standard now, and all the startups will look at it, and have experience with it. And now we can have a single Unity app running on Windows 10 phones, tablets, and desktops."

Windows 10 and Unity 5.2 Could Explode

Support for Windows 10 Universal Windows Platform (UWP) Apps in Unity 5.2 has huge potential. Unity 5.2 will export Visual Studio* 2015 solution files, which can then build and run on Windows 10 PCs, plus Windows 10 phones and tablets. Three architectures are supported: ARM, x86 and x64. In addition, developers can use .NET Core 5.0 in their projects.

In order to build Unity games as Universal Windows Platform (UWP) apps, developers need the following:

  • Unity 5.2 or later
  • A Windows 10 machine
  • Visual Studio 2015 RTM (the minimum version is 14.0.23107.0). Earlier versions, for example Visual Studio 2015 RC, are not supported by Unity 5.2.
  • The Windows 10 SDK

With UWP support in Unity 5.2, developers can build a single game for Windows 10 that targets multiple devices ranging from phones to tablets to PCs to Xbox*. This means that multiple screen-sizes and resolutions, different device capabilities, and a wide range of aspect ratios are all supported within a single game.

Starting with Unity 5.2, Visual Studio becomes the new default Unity scripting editor on Windows. The Unity installer installs the free Visual Studio Community 2015 and the Visual Studio 2015 Tools for Unity. Unity will automatically pick up VSTU where it is installed. Scripts will open directly into Visual Studio, where developers can write and debug their Unity game.

Outfit7: A Creative Path To Success

Since its inception in 2010, and throughout its tremendous growth since, the brains behind Outfit7 have made a few interesting decisions, taking paths that other developers chose not to travel. Consider the company’s rejection of outside funding during its early stages. When the Outfit7 team reached the 150-million-download mark, they realized they had the financial strength and monthly income to take the reins themselves, and avoid the distraction of answering to bankers and venture capitalists. Far from having to compromise on release dates, quality levels, and character choices, they blazed their own trail.

When fledgling teams like Outfit7 are fortunate enough to tap into their own funds, they are also more rational on how those dollars are spent. They tend to eschew the traditional rush to an initial public offering (IPO), made in order to stock up on cash and pay off early investors and co-founders. This was the case with Outfit7, as Login has explained several times, notably to Bloomberg Television during an interview touching on the company’s initial wave of success.

In another diversion from the typical gaming app developer journey, Outfit7 saw themselves evolving into something bigger and better, not tied exclusively to gaming. In fact, from the outset Outfit7 never really set out to make games. The objective was to entertain and empower the inner kid in everyone. And that has most definitely been achieved.

Rather than seeing itself as just another app developer, Outfit7 has molded itself into an "omnimedia" company; creating and shaping its digital app creations as true intellectual properties. When Talking Tom and Friends: The Animated Series premiered on YouTube, Login chatted with Entrepreneur* about the company’s meteoric rise, and how fans emotionally connect with its characters. For instance, the in-app features encourage users to share their own creations on social media, developing a humorous community that transcends nationality, gender and age. The brand rapidly gained more fans as people used Outfit7 characters to communicate with one another.

The founders have learned a lot on the journey from the company’s infancy to the position they have today as a true global entertainment company. That puts them in a position to give useful advice to beginning teams. Login’s advice to aspiring mobile app developers is simple: "Gameplay needs to be your number one priority."

"If it is not fun, then users won’t open it a second time," he continued. "You not only need to continually grow your user base, but you also need to monetize it. And never rely solely on in-app purchases as your only revenue source. Developers should always explore other monetary options, and always keep the total experience in mind."

More advice: spend a lot of time understanding what your application is all about. Who are you designing the experience for? It's not a computer or a browser–it's a mobile device that fits into your pocket. Login believes lots of developers make the mistake of taking an existing concept and adapting it into mobile, never quite meeting the audience’s expectations.

"The mobile market allows you to be anything you want to be," he explained. "The best performing apps are in categories all across the board. Utilitarian, educational or fun; you should be focused and be able to engage your audience."

The future for Outfit7 will almost certainly diverge completely from the normal cycle of developers continually tinkering around the edges of their first great idea. Since app users really care about the Talking Tom and Friends characters, they interact constantly with Outfit7’s creations and get very attached. The open-ended nature of the apps precludes following a traditional gaming path, taking in bigger, badder explosions, harsher villains, heavier weapons, larger levels, and so on. Instead, Outfit7 continues to expand the Talking Tom and Friends franchise with new characters and new ways to interact with them, including movies and videos.

Wherever their path leads, Outfit7 will surely continue to break records and make news. Setting a new standard for developers to follow definitely makes this an outfit worth watching.

Additional Resources

Outfit 7: http://www.outfit7.com

My Talking Tom at the Google Play store: https://play.google.com/store/apps/details?id=com.outfit7.mytalkingtomfree

Talking Tom and Friends on YouTube: https://www.youtube.com/watch?v=MIVtb1UkTis

Unity: https://unity3d.com

Windows 10 SDK: https://dev.windows.com/en-us/downloads/windows-10-sdk

Top Reasons Why You Should Invest in Mobile App Development


Daniel Kaufman
Co-Founder at Brooklyn Labs

With mobile apps built for Android, Apple, and other mobile operating systems, you can build brand awareness and trust among a vast number of current and potential customers. Many customers now expect a business or brand to have its own mobile app, which means an app is no longer just a way to gain a competitive edge over other businesses; having a dedicated mobile app also adds to the credibility of the brand.

Given the significance that mobile applications hold in society today, it only makes sense to build one for your business. Here are some reasons why you should invest in mobile app development.

1. Mobile Apps Deliver On-The-Go Advertising

With a mobile app, your current customers can reach your business from anywhere, at any time, in a customer-friendly environment. Regular use of your app reinforces your brand or business, which means that when customers want to buy something, chances are they will come to you. The app builds a relationship with them; it is the equivalent of placing your business in your users’ pockets.

2. The World has gone Mobile

There is no question that the world has gone mobile, and there is no turning back. Customers are using their phones to find local businesses, and your online branding efforts are being viewed on mobile devices. Just having a website is not sufficient anymore. Consumers are turning away from desktop browsers and relying on mobile apps. Unlike traditional websites, which can overwhelm a six-inch mobile display, apps succeed as an intuitive purchasing and browsing alternative.

3. Apps Increase Interest

When you develop an app, it gives you a simple way to show your products or services to your current and future customers. Every time they need to buy something, they can simply use it as a one-stop point for all the information they want.

4. A Larger, Younger Audience

Most young people went mobile a long time ago. Nearly 75 percent of the millennial generation will have smartphones by the end of the year. It is hard to engage the youth demographic using old-fashioned techniques: young people choose to rely on their mobile devices, even when they have access to a traditional personal computer. Smartphones have become the new tool for talking with family and friends, and for browsing and buying goods and services online. To reach this audience, you need to have a mobile app.

5. It Can Be a Social Platform

It almost goes without saying that people are passionate about social media, so you will want to be part of that world as well. Adding social features such as likes, comments, and in-app messaging to your app can help your business raise its social standing. People spend a lot of time on social media, particularly Facebook and Twitter, so a mobile app that gives them the features they go to social media for means they will spend more and more time in your app.

Easy Android* Maps Part 1


Hafizh Herdi
Founder at TWOH's Engineering

Nowadays, with more and more Android* apps using location-based and mapping technology, maps have become a must-have feature in an Android app. For example, to display the nearest gas stations and then draw a route from your location to a gas station, it is much easier for the user if we display it on a map. Fortunately, with the Google Maps Android* API and Android* Studio, we can easily create a map-based feature and embed it in our app. In this post we will display a simple map in our application using the Google Maps Android* SDK and Android Studio.

Create Google Maps Android* API Key

First, to use the Google Maps Android API, we need an API key. That key can be obtained for free from the Google Developer Console. You can click this link, and then choose > Create a new Project. A new page will open which enables you to create credentials; in the pop-up, choose API key as in the example below:

Once done, you will be prompted to enter a name for your Android API key, as in the example below:

You can also specify a package name and certificate fingerprint so the API key can only be used by your own specific app, but for this tutorial we will use a universal key. Once you’re done with the API key name, click the > Create button. And voilà! Here’s your Android API key:

Copy it and save it somewhere, because we will use the key later.

Creating Map-based application using Android Studio

Android Studio is arguably the most popular IDE among Android developers, so I am sure most of you are already familiar with it. We will use Android Studio to create our first map app.

Firstly, open Android Studio, and from the top menu bar choose File > New > New Project, then choose the Google Maps Activity template.

After that, click Next and a new page will open which lets you enter your Maps Activity name, layout name, and activity title (we will leave those at their defaults). The next page is where you fill in your app name and package name. When you’re done, click Finish and you will go to your application page.

We will use our previously generated API key for this app. On your application project page, open the google_maps_api.xml file in the /res/values folder. Replace the value YOUR_KEY_HERE with your actual API key, as in the example below.

<resources><string name="google_maps_key" templateMergeStrategy="preserve" translatable="false">AIza************0c</string></resources>

After that, let’s open the MapsActivity.java file, which contains all the logic for creating our map app. The code in the file looks like this:

package id.web.twoh.mymapapp;

import android.support.v4.app.FragmentActivity;
import android.os.Bundle;

import com.google.android.gms.maps.CameraUpdateFactory;
import com.google.android.gms.maps.GoogleMap;
import com.google.android.gms.maps.OnMapReadyCallback;
import com.google.android.gms.maps.SupportMapFragment;
import com.google.android.gms.maps.model.LatLng;
import com.google.android.gms.maps.model.MarkerOptions;

public class MapsActivity extends FragmentActivity implements OnMapReadyCallback {

    private GoogleMap mMap;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_maps);
        // Obtain the SupportMapFragment and get notified when the map is ready to be used.
        SupportMapFragment mapFragment = (SupportMapFragment) getSupportFragmentManager()
                .findFragmentById(R.id.map);
        mapFragment.getMapAsync(this);
    }


    /**
     * Manipulates the map once available.
     * This callback is triggered when the map is ready to be used.
     * This is where we can add markers or lines, add listeners or move the camera. In this case,
     * we just add a marker near Sydney, Australia.
     * If Google Play services is not installed on the device, the user will be prompted to install
     * it inside the SupportMapFragment. This method will only be triggered once the user has
     * installed Google Play services and returned to the app.
     */
    @Override
    public void onMapReady(GoogleMap googleMap) {
        mMap = googleMap;

        // Add a marker in Sydney and move the camera
        LatLng sydney = new LatLng(-34, 151);
        mMap.addMarker(new MarkerOptions().position(sydney).title("Marker in Sydney"));
        mMap.moveCamera(CameraUpdateFactory.newLatLng(sydney));
    }
}

That’s quite simple code, but let’s review it together. First, the activity obtains a GoogleMap instance, which represents the map in our app. Then it creates a new Marker to be placed on top of the map; the marker uses the location from the LatLng variable, which is given the coordinates (-34, 151). Finally, the moveCamera() method is called so that when we first open the app, the map moves to that specific coordinate, which in this example brings us to Sydney.

Okay, maybe you will ask, “What if I want the map pointing to my location?” No worries, we already have a method to handle that. For that purpose, let’s modify the code a little bit! Just add this line:

mMap.setMyLocationEnabled(true);

Right below the moveCamera() method. It enables the map to display a little “my location” button at the top right of the map screen. When we click that button, the map animates and moves directly to our location. Remember that to use this feature, you must enable Location Services on your Android device.
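One caveat worth noting: on Android, the my-location layer also requires a location permission declared in your app's AndroidManifest.xml. The permission below is the standard Android one; this is a sketch of the manifest entry, shown here as an assumption about your project setup:

```xml
<!-- Required for mMap.setMyLocationEnabled(true) to work -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
```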

Let’s run our app. It will open a nice map like the one below. Look how easy it is!

This is just the first step for Android maps; in the next tutorial we will learn more about customizing the maps. For more details you can check the Google Maps Android API documentation. You can also find the sample source code on my GitHub*.

Intel® XDK FAQs - Cordova


How do I set app orientation?

You set the orientation under the Build Settings section of the Projects tab.

To control the orientation of an iPad you may need to create a simple plugin that contains a single plugin.xml file like the following:

<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true"><string></string></config-file><config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true"><array><string>UIInterfaceOrientationPortrait</string></array></config-file> 

Then add the plugin as a local plugin using the plugin manager on the Projects tab.

HINT: to import the plugin.xml file you created above, you must select the folder that contains the plugin.xml file; you cannot select the plugin.xml file itself using the import dialog, because a typical plugin consists of many files, not a single plugin.xml. The plugin you created based on the instructions above requires only a single file; it is an atypical plugin.

Alternatively, you can use this plugin: https://github.com/yoik/cordova-yoik-screenorientation. Import it as a third-party Cordova* plugin using the plugin manager with the following information:

  • cordova-plugin-screen-orientation
  • specify a version (e.g. 1.4.0) or leave blank for the "latest" version

Or, you can reference it directly from its GitHub repo:

To use the screen orientation plugin referenced above you must add some JavaScript code to your app to manipulate the additional JavaScript API that is provided by this plugin. Simply adding the plugin will not automatically fix your orientation, you must add some code to your app that takes care of this. See the plugin's GitHub repo for details on how to use that API.
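As an illustration of the kind of code you would add, recent versions of that plugin expose a W3C-style screen.orientation.lock() API. The helper below is our own sketch, not part of the plugin; the guard lets the same code run harmlessly in a plain browser where the plugin is absent:

```javascript
// Try to lock the screen orientation via the plugin's API.
// Returns true if the lock call was made, false if the API is unavailable.
function lockOrientation(mode) {
  if (typeof screen !== 'undefined' &&
      screen.orientation && typeof screen.orientation.lock === 'function') {
    screen.orientation.lock(mode); // e.g. 'portrait' or 'landscape'
    return true;
  }
  return false; // plugin not loaded, or running outside the app
}
```

Call it once the Cordova deviceready event has fired, since the plugin's API does not exist before then.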

Is it possible to create a background service using Intel XDK?

Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking), Intel XDK's build system will work with it.

How do I send an email from my App?

You can use the Cordova* email plugin or use web intent - PhoneGap* and Cordova* 3.X.

How do you create an offline application?

You can use the technique described here by creating an offline.appcache file and then setting it up to store the files that are needed to run the program offline. Note that offline applications need to be built using the Cordova* or Legacy Hybrid build options.
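For illustration, a minimal offline.appcache manifest might look like the following; the file names are hypothetical placeholders for your app's own assets:

```
CACHE MANIFEST
# v1 - change this comment to force clients to re-download the cache

CACHE:
index.html
js/app.js
css/style.css

NETWORK:
*
```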

How do I work with alarms and timed notifications?

Unfortunately, alarms and notifications are advanced subjects that require a background service. This cannot be implemented in HTML5 and can only be done in native code by using a plugin. Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support the development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking) the Intel XDK's build system will work with it.

How do I get a reliable device ID?

You can use the Phonegap/Cordova* Unique Device ID (UUID) plugin for Android*, iOS* and Windows* Phone 8.

How do I implement In-App purchasing in my app?

There is a Cordova* plugin for this. A tutorial on its implementation can be found here. There is also a sample in Intel XDK called 'In App Purchase' which can be downloaded here.

How do I install custom fonts on devices?

Fonts can be considered an asset included with your app, private to the app and not shared with other apps on the device, just like images and CSS files. (It is possible to share some files between apps using, for example, the SD card on an Android* device.) If you include the font files as assets in your application, there is no download time to consider: they are part of your app and already exist on the device after installation.
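A bundled font can then be referenced from your CSS with a standard @font-face rule; the font file name and family name below are hypothetical:

```css
/* Load a font file shipped as an app asset (path relative to your www folder) */
@font-face {
  font-family: "MyAppFont";
  src: url("fonts/myappfont.ttf");
}

/* Use it like any other font */
body { font-family: "MyAppFont", sans-serif; }
```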

How do I access the device's file storage?

You can use HTML5 local storage, and this is a good article to get started with. Alternatively, there is a Cordova* file plugin for that.
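As a minimal sketch of the local storage approach, the helpers below persist a settings object as JSON. The saveSettings/loadSettings names and the 'settings' key are our own illustration, not part of any API; in the app you would pass window.localStorage as the store:

```javascript
// Persist a small settings object as a JSON string.
// `store` is any Web Storage-like object (window.localStorage in the app).
function saveSettings(store, settings) {
  store.setItem('settings', JSON.stringify(settings));
}

// Restore the settings, defaulting to an empty object if nothing was saved.
function loadSettings(store) {
  var raw = store.getItem('settings');
  return raw ? JSON.parse(raw) : {};
}
```

Keep in mind that local storage holds only strings and has a small quota (typically a few MB); for larger files, use the Cordova file plugin instead.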

Why isn't AppMobi* push notification services working?

This seems to be an issue on AppMobi's end and can only be addressed by them. PushMobi is only available in the "legacy" container. AppMobi* has not developed a Cordova* plugin, so it cannot be used in the Cordova* build containers. Thus, it is not available with the default build system. We recommend that you consider using the Cordova* push notification plugin instead.

How do I configure an app to run as a service when it is closed?

If you want a service to run in the background you'll have to write a service, either by creating a custom plugin or writing a separate service using standard Android* development tools. The Cordova* system does not facilitate writing services.

How do I dynamically play videos in my app?

  1. Download the JavaScript and CSS files from https://github.com/videojs and include them in your project files.
  2. Add references to them into your index.html file.
  3. Add a panel 'main1' that will be playing the video. This panel will be launched when the user clicks on the video in the main panel.

     
    <div class="panel" id="main1" data-appbuilder-object="panel" style=""><video id="example_video_1" class="video-js vjs-default-skin" controls="controls" preload="auto" width="200" poster="camera.png" data-setup="{}"><source src="JAIL.mp4" type="video/mp4"><p class="vjs-no-js">To view this video please enable JavaScript*, and consider upgrading to a web browser that <a href=http://videojs.com/html5-video-support/ target="_blank">supports HTML5 video</a></p></video><a onclick="runVid3()" href="#" class="button" data-appbuilder-object="button">Back</a></div>
  4. When the user clicks on the video, the click event sets the 'src' attribute of the video element to what the user wants to watch.

     
    function runVid2(){
          document.getElementsByTagName("video")[0].setAttribute("src","appdes.mp4");
          $.ui.loadContent("#main1",true,false,"pop");
    }
  5. The 'main1' panel opens waiting for the user to click the play button.

NOTE: The video does not play in the emulator and so you will have to test using a real device. The user also has to stop the video using the video controls. Clicking on the back button results in the video playing in the background.

How do I design my Cordova* built Android* app for tablets?

This page lists a set of guidelines to follow to make your app of tablet quality. If your app fulfills the criteria for tablet app quality, it can be featured in Google* Play's "Designed for tablets" section.

How do I resolve icon related issues with Cordova* CLI build system?

Ensure icon sizes are properly specified in the intelxdk.config.additions.xml file. For example, if you are targeting iOS 6, you need to manually specify the icon sizes that iOS* 6 uses.

<icon platform="ios" src="images/ios/72x72.icon.png" width="72" height="72" /><icon platform="ios" src="images/ios/57x57.icon.png" width="57" height="57" />

These are not included by the build system by default, so you will have to specify them in the additions file.

For more information on adding build options using intelxdk.config.additions.xml, visit: /en-us/html5/articles/adding-special-build-options-to-your-xdk-cordova-app-with-the-intelxdk-config-additions-xml-file

Is there a plugin I can use in my App to share content on social media?

Yes, you can use the PhoneGap Social Sharing plugin for Android*, iOS* and Windows* Phone.

Iframe does not load in my app. Is there an alternative?

Yes, you can use the inAppBrowser plugin instead.

Why are intel.xdk.istablet and intel.xdk.isphone not working?

Those properties are quite old and are based on the legacy AppMobi* system. An alternative is to detect the viewport size instead. You can get the user's screen size using the screen.width and screen.height properties (refer to this article for more information) and control the actual view of the webview by using the viewport meta tag (this page has several examples). You can also look through this forum thread for a detailed discussion of the same.
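The screen-size approach can be sketched as follows; the helper name and the 768px breakpoint are illustrative assumptions, not part of any Intel XDK or Cordova API:

```javascript
// Rough tablet/phone classification based on the shorter screen side.
// In a Cordova app you would call it as isProbablyTablet(screen.width, screen.height).
// The 768px breakpoint is an illustrative assumption; pick one that matches
// the devices you actually target.
function isProbablyTablet(width, height) {
    return Math.min(width, height) >= 768;
}
```

Combine this with a viewport meta tag to control how the webview scales your content.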

How do I enable security in my app?

We recommend using the App Security API. The App Security API is a collection of JavaScript APIs for hybrid HTML5 application developers. It enables developers, even those who are not security experts, to take advantage of the security properties and capabilities supported by the platform. The API collection is available to developers in the form of a Cordova plugin (JavaScript API and middleware), supported on the following operating systems: Windows, Android and iOS.
For more details please visit: https://software.intel.com/en-us/app-security-api.

For enabling it, please select the App Security plugin on the plugins list of the Project tab and build your app as a Cordova Hybrid app. After adding the plugin, you can start using it simply by calling its API. For more details about how to get started with the App Security API plugin, please see the relevant sample app articles at: https://software.intel.com/en-us/xdk/article/my-private-photos-sample and https://software.intel.com/en-us/xdk/article/my-private-notes-sample.

Why does my build fail with Admob plugins? Is there an alternative?

Intel XDK does not support the library project that was newly introduced in the com.google.playservices@21.0.0 plugin. Admob plugins depend on "com.google.playservices", which adds the Google* Play services jar to the project. The "com.google.playservices@19.0.0" version is a simple jar file that works quite well, but "com.google.playservices@21.0.0" uses a new feature to include a whole library project. It works if built locally with the Cordova CLI, but fails when using the Intel XDK.

To remain compatible with the Intel XDK, change the admob plugin's dependency to "com.google.playservices@19.0.0".

Why does the intel.xdk.camera plugin fail? Is there an alternative?

There seem to be some general issues with the camera plugin on iOS*. An alternative is to use the Cordova camera plugin instead, and change the version to 0.3.3.

How do I resolve Geolocation issues with Cordova?

Give this app a try; it contains lots of useful comments and console log messages. However, use the Cordova 0.3.10 version of the geo plugin instead of the Intel XDK geo plugin. The Intel XDK buttons in the sample app will not work in a built app because the Intel XDK geo plugin is not included, but they will partially work in the Emulator and Debug tab. If you test it on a real device without the Intel XDK geo plugin selected, you should be able to see what is and is not working on your device. Note that the Intel XDK geo plugin cannot be used in the same build as the Cordova geo plugin; do not use the Intel XDK geo plugin, as it will be discontinued.

Geo fine might not work because of the following reasons:

  1. Your device does not have a GPS chip
  2. It is taking a long time to get a GPS lock (if you are indoors)
  3. The GPS on your device has been disabled in the settings

Geo coarse is the safest bet to quickly get an initial reading. It will get a reading based on a variety of inputs; it is usually not as accurate as geo fine, but generally accurate enough to know what town you are located in and your approximate location in that town. Geo coarse will also prime the geo cache so there is something to read when you try to get a geo fine reading. Ensure your code can handle situations where you might not be getting any geo data, as there is no guarantee you will get a geo fine reading at all, or in a reasonable period of time. Success with geo fine is highly dependent on parameters that are typically outside of your control.
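The coarse-then-fine advice above can be sketched with the standard W3C Geolocation API, which the Cordova geo plugin implements. The timeout and maximumAge values are illustrative assumptions, and the geolocation object is injected only to keep the helper exercisable off-device; in a real app call requestCoarseThenFine(navigator.geolocation, onPos, onErr):

```javascript
// Take a fast coarse reading first (primes the geo cache), then ask for a fine fix.
// geo is expected to be navigator.geolocation in a real app.
function requestCoarseThenFine(geo, onPosition, onError) {
    // Coarse: quick initial reading, lower accuracy.
    geo.getCurrentPosition(onPosition, onError, {
        enableHighAccuracy: false, timeout: 5000, maximumAge: 60000
    });
    // Fine: may take much longer or never succeed (no GPS chip, indoors, GPS disabled).
    geo.getCurrentPosition(onPosition, onError, {
        enableHighAccuracy: true, timeout: 30000, maximumAge: 0
    });
}
```

Both callbacks must tolerate never being called with a fine fix, as noted above.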

Is there an equivalent Cordova* plugin for intel.xdk.player.playPodcast? If so, how can I use it?

Yes, there is and you can find the one that best fits the bill from the Cordova* plugin registry.

To make this work you will need to do the following:

  • Detect your platform (you can use uaparser.js or you can do it yourself by inspecting the user agent string)
  • Include the plugin only on the Android* platform and use <video> on iOS*.
  • Create conditional code to do what is appropriate for the platform detected
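A minimal user-agent sniff for the first step might look like this; detectPlatform is a hypothetical helper and the patterns cover only the common cases:

```javascript
// Classify the platform from the user agent string.
// In a Cordova app you would call detectPlatform(navigator.userAgent).
function detectPlatform(ua) {
    if (/android/i.test(ua)) return "android";
    if (/iphone|ipad|ipod/i.test(ua)) return "ios";
    return "other";
}
```

Based on the result, branch to the Android* plugin path or to a plain <video> element on iOS*.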

You can force a plugin to be part of an Android* build by adding it manually into the additions file. To see what the basic directives are to include a plugin manually:

  1. Include it using the "import plugin" dialog, perform a build and inspect the resulting intelxdk.config.android.xml file.
  2. Then remove it from your Project tab settings, copy the directive from that config file and paste it into the intelxdk.config.additions.xml file. Prefix that directive with <!-- +Android* -->.

More information is available here and this is what an additions file can look like:

<preference name="debuggable" value="true" />
<preference name="StatusBarOverlaysWebView" value="false" />
<preference name="StatusBarBackgroundColor" value="#000000" />
<preference name="StatusBarStyle" value="lightcontent" />
<!-- -iOS* --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="org.apache.cordova.statusbar" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="https://github.com/EddyVerbruggen/Flashlight-PhoneGap-Plugin" />

This sample forces a plugin included with the "import plugin" dialog to be excluded from the platforms shown. You can include it only in the Android* platform by using conditional code and one or more appropriate plugins.

How do I display a webpage in my app without leaving my app?

The most effective way to do so is by using inAppBrowser.
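A minimal sketch of using the plugin; openInApp is a hypothetical wrapper, and the opener object is injected only so the wrapper can be exercised off-device. In a real app, after the deviceready event, call openInApp(cordova.InAppBrowser, url):

```javascript
// Open a page in the in-app browser instead of leaving the app.
// browser is expected to be cordova.InAppBrowser in a real app.
function openInApp(browser, url) {
    // '_blank' loads the page in the in-app browser;
    // 'location=yes' shows an address bar so users know where they are.
    return browser.open(url, "_blank", "location=yes");
}
```

The returned reference can later be closed with its close() method.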

Does Cordova* media have callbacks in the emulator?

While Cordova* media objects have proper callbacks when using the debug tab on a device, the emulator doesn't report state changes back to the Media object. This functionality has not been implemented yet. Under emulation, the Media object is implemented by creating an <audio> tag in the program under test. The <audio> tag emits a bunch of events, and these could be captured and turned into status callbacks on the Media object.

Why does the Cordova version number not match the Projects tab's Build Settings CLI version number, the Emulate tab, App Preview and my built app?

This is due to the difficulty in keeping different components in sync and is compounded by the version numbering convention that the Cordova project uses to distinguish build tool versions (the CLI version) from platform versions (the Cordova target-specific framework version) and plugin versions.

The CLI version you specify in the Projects tab's Build Settings section is the "Cordova CLI" version that the build system uses to build your app. Each version of the Cordova CLI tools come with a set of "pinned" Cordova platform framework versions, which are tied to the target platform.

NOTE: the specific Cordova platform framework versions shown below are subject to change without notice.

Our Cordova CLI 4.1.2 build system was "pinned" to: 

  • cordova-android@3.6.4 (Android Cordova platform version 3.6.4)
  • cordova-ios@3.7.0 (iOS Cordova platform version 3.7.0)
  • cordova-windows@3.7.0 (Cordova Windows platform version 3.7.0)

Our Cordova CLI 5.1.1 build system is "pinned" to:

  • cordova-android@4.1.1 (as of March 23, 2016)
  • cordova-ios@3.8.0
  • cordova-windows@4.0.0

Our Cordova CLI 5.4.1 build system is "pinned" to: 

  • cordova-android@5.0.0
  • cordova-ios@4.0.1
  • cordova-windows@4.3.1

Our Cordova CLI 6.2.0 build system is "pinned" to: 

  • cordova-android@5.1.1
  • cordova-ios@4.1.1
  • cordova-windows@4.3.2

Our CLI 6.2.0 build system is nearly identical to a standard Cordova CLI 6.2.0 installation. A standard 6.2.0 installation differs slightly from our build system because it specifies the cordova-ios@4.1.0 and cordova-windows@4.3.1 platform versions. There are no differences in the cordova-android platform versions. 

Our CLI 5.4.1 build system really should be called "CLI 5.4.1+" because the platform versions it uses are closer to the "pinned" versions in the Cordova CLI 6.0.0 release than those "pinned" in the original CLI 5.4.1 release.

Our CLI 5.1.1 build system has been deprecated as of August 2, 2016 and will be retired with an upcoming fall 2016 release of the Intel XDK. It is highly recommended that you upgrade your apps to build with Cordova CLI 6.2.0 as soon as possible.

The Cordova platform framework version you get when you build an app does not equal the CLI version number in the Build Settings section of the Projects tab; it equals the Cordova platform framework version that is "pinned" to our build system's CLI version (see the list of pinned versions, above).

Technically, the target-specific Cordova platform frameworks can be updated [independently] for a given version of CLI tools. In some cases, our build system may use a Cordova platform version that is later than the version that was "pinned" to that version of the CLI when it was originally released by the Cordova project (that is, the Cordova platform versions originally specified by the Cordova CLI x.y.z links above).

You may see Cordova platform version differences in the Simulate tab, App Preview and your built app due to:

  • The Simulate tab uses one specific Cordova framework version. We try to make sure that the version of the Cordova platform it uses closely matches the current default Intel XDK version of Cordova CLI.

  • App Preview is released independently of the Intel XDK and, therefore, may use a different platform version than what you will see reported by the Simulate tab or your built app. Again, we try to release App Preview so it matches the version of the Cordova framework that is considered to be the default version for the Intel XDK at the time App Preview is released; but since the various tools are not always released in perfect sync, that is not always possible.

  • Your app is built with a "pinned" Cordova platform version, which is determined by the Cordova CLI version you specified in the Projects tab's Build Settings section. There are always at least two different CLI versions available in the Intel XDK build system.

  • For those versions of Crosswalk that were built with the Intel XDK CLI 4.1.2 build system, the cordova-android framework version was determined by the Crosswalk project, not by the Intel XDK build system.

  • When building an Android-Crosswalk app with Intel XDK CLI 5.1.1 and later, the cordova-android framework version equals the "pinned" cordova-android platform version for that CLI version (see lists above).

Do these Cordova platform framework version numbers matter? Occasionally, yes, but normally, not that much. There are some issues that come up that are related to the Cordova platform version, but they tend to be rare. The majority of the bugs and compatibility issues you will experience in your app have more to do with the versions and mix of Cordova plugins you choose to use and the HTML5 webview runtime on your test devices. See When is an HTML5 Web App a WebView App? for more details about what a webview is and how the webview affects your app.

The "default version" of CLI that the Intel XDK build system uses is rarely the most recent version of the Cordova CLI tools distributed by the Cordova project. There is always a lag between Cordova project releases and our ability to incorporate those releases into our build system and other Intel XDK components. In addition, we are not able to provide every CLI release that is made available by the Cordova project.

How do I add a third party plugin?

Please follow the instructions on this doc page to add a third-party plugin: Adding Plugins to Your Intel® XDK Cordova* App. To confirm that a plugin has been included as part of your app, check the build log; you will see it there if it was successfully added to your build.

How do I make an AJAX call that works in my browser work in my app?

Please follow the instructions in this article: Cordova CLI 4.1.2 Domain Whitelisting with Intel XDK for AJAX and Launching External Apps.

I get an "intel is not defined" error, but my app works in Test tab, App Preview and Debug tab. What's wrong?

When your app runs in the Test tab, App Preview or the Debug tab the intel.xdk and core Cordova functions are automatically included for easy debug. That is, the plugins required to implement those APIs on a real device are already included in the corresponding debug modules.

When you build your app you must include the plugins that correspond to the APIs you are using in your build settings. This means you must enable the Cordova and/or XDK plugins that correspond to the APIs you are using. Go to the Projects tab and ensure that the plugins you need are selected in your project's plugin settings. See Adding Plugins to Your Intel® XDK Cordova* App for additional details.

How do I target my app for use only on an iPad or only on an iPhone?

There is an undocumented feature in Cordova that should help you (the Cordova project provided this feature but failed to document it for the rest of the world). If you use the appropriate preference in the intelxdk.config.additions.xml file you should get what you need:

<preference name="target-device" value="tablet" />     <!-- Installs on iPad, not on iPhone -->
<preference name="target-device" value="handset" />    <!-- Installs on iPhone; iPad installs in a zoomed view and doesn't fill the entire screen -->
<preference name="target-device" value="universal" />  <!-- Installs on iPhone and iPad correctly -->

If you need info regarding the additions.xml file, see the blank template or this doc file: Adding Intel® XDK Cordova Build Options Using the Additions File.

Why does my build fail when I try to use the Cordova* Capture Plugin?

The Cordova* Capture plugin has a dependency on the File plugin. Please make sure you have both plugins selected on the Projects tab.

How can I pinch and zoom in my Cordova* app?

For now, using the viewport meta tag is the only option to enable pinch and zoom. However, its behavior is unpredictable in different webviews. Testing a few sample apps has led us to believe that this feature works better on Crosswalk for Android. You can test this by building the Hello Cordova sample app for both Android and Crosswalk for Android. Pinch and zoom will work only on the latter, though both have:

<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=yes, minimum-scale=1, maximum-scale=2">.

Please visit the following pages to get a better understanding of when to build with Crosswalk for Android:

http://blogs.intel.com/evangelists/2014/09/02/html5-web-app-webview-app/

https://software.intel.com/en-us/xdk/docs/why-use-crosswalk-for-android-builds

Another device oriented approach is to enable it by turning on Android accessibility gestures.

How do I make my Android application use the fullscreen so that the status and navigation bars disappear?

The Cordova* fullscreen plugin can be used to do this. For example, in your initialization code, include this function AndroidFullScreen.immersiveMode(null, null);.

You can get this third-party plugin from here https://github.com/mesmotronic/cordova-fullscreen-plugin
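As a sketch of the wiring, the plugin call should be deferred until the Cordova deviceready event fires; calling plugin APIs earlier is a common source of errors. enterFullscreen is a hypothetical helper, and the document object and plugin API are injected only so the wiring can be exercised off-device; in a real app call enterFullscreen(document, AndroidFullScreen):

```javascript
// Defer the fullscreen call until Cordova is ready.
function enterFullscreen(doc, fullScreenApi) {
    doc.addEventListener("deviceready", function () {
        // immersiveMode(success, error) hides the status and navigation bars
        fullScreenApi.immersiveMode(null, null);
    }, false);
}
```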

How do I add XXHDPI and XXXHDPI icons to my Android or Crosswalk application?

The Cordova CLI 4.1.2 build system will support this feature, but our 4.1.2 build system (and the 2170 version of the Intel XDK) does not handle the XX and XXX sizes directly. Use this workaround until these sizes are supported directly:

  • copy your XX and XXX icons into your source directory (usually named www)
  • add the following lines to your intelxdk.config.additions.xml file
  • see this Cordova doc page for some more details

Assuming your icons and splash screen images are stored in the "pkg" directory inside your source directory (your source directory is usually named www), add lines similar to these into your intelxdk.config.additions.xml file (the precise names of your png files may be different than what is shown here):

<!-- for adding xxhdpi and xxxhdpi icons on Android -->
<icon platform="android" src="pkg/xxhdpi.png" density="xxhdpi" />
<icon platform="android" src="pkg/xxxhdpi.png" density="xxxhdpi" />
<splash platform="android" src="pkg/splash-port-xhdpi.png" density="port-xhdpi"/>
<splash platform="android" src="pkg/splash-land-xhdpi.png" density="land-xhdpi"/>

The precise names of your PNG files are not important, but the "density" designations are very important and, of course, the respective resolutions of your PNG files must be consistent with Android requirements. Those density parameters specify the respective "res-drawable-*dpi" directories that will be created in your APK for use by the Android system. NOTE: splash screen references have been added for reference, you do not need to use this technique for splash screens.

You can continue to insert the other icons into your app using the Intel XDK Projects tab.

Which plugin is the best to use with my app?

We are not able to track all the plugins out there, so we generally cannot give you a "this is better than that" evaluation of plugins. Check the Cordova plugin registry to see which plugins are most popular and check Stack Overflow to see which are best supported; also, check the individual plugin repos to see how well the plugin is supported and how frequently it is updated. Since the Cordova platform and the mobile platforms continue to evolve, those that are well-supported are likely to be those that have good activity in their repo.

Keep in mind that the XDK builds Cordova apps, so whichever plugins you find being supported and working best with other Cordova (or PhoneGap) apps would likely be your "best" choice.

See Adding Plugins to Your Intel® XDK Cordova* App for instructions on how to include third-party plugins with your app.

What are the rules for my App ID?

The precise App ID naming rules vary as a function of the target platform (e.g., Android, iOS, Windows, etc.). Unfortunately, the App ID naming rules are further restricted by the Apache Cordova project and sometimes change with updates to the Cordova project. The Cordova project is the underlying technology that your Intel XDK app is based upon; when you build an Intel XDK app you are building an Apache Cordova app.

CLI 5.1.1 has more restrictive App ID requirements than previous versions of Apache Cordova (the CLI version refers to Apache Cordova CLI release versions). In this case, the Apache Cordova project decided to set limits on acceptable App IDs to equal the minimum set for all platforms. We hope to eliminate this restriction in a future release of the build system, but for now (as of the 2496 release of the Intel XDK), the current requirements for CLI 5.1.1 are:

  • Each section of the App ID must start with a letter
  • Each section can only consist of letters, numbers, and the underscore character
  • Each section cannot be a Java keyword
  • The App ID must consist of at least 2 sections (each section separated by a period ".").
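The rules above can be sketched as a simple checker; isValidAppId is a hypothetical helper, and the Java keyword list here is deliberately abbreviated (a real check needs the full list):

```javascript
// Partial list of Java keywords for illustration only.
var JAVA_KEYWORDS = ["class", "int", "new", "package", "switch"];

function isValidAppId(appId) {
    var sections = appId.split(".");
    if (sections.length < 2) return false;               // at least 2 sections
    return sections.every(function (s) {
        return /^[A-Za-z][A-Za-z0-9_]*$/.test(s) &&      // starts with a letter; letters, digits, underscore
               JAVA_KEYWORDS.indexOf(s) === -1;          // not a Java keyword
    });
}
```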

 

iOS /usr/bin/codesign error: certificate issue for iOS app?

If you are getting an iOS build fail message in your detailed build log that includes a reference to a signing identity error you probably have a bad or inconsistent provisioning file. The "no identity found" message in the build log excerpt, below, means that the provisioning profile does not match the distribution certificate that was uploaded with your application during the build phase.

Signing Identity:     "iPhone Distribution: XXXXXXXXXX LTD (Z2xxxxxx45)"
Provisioning Profile: "MyProvisioningFile"
                      (b5xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxe1)

    /usr/bin/codesign --force --sign 9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6 --resource-rules=.../MyApp/platforms/ios/build/device/MyApp.app/ResourceRules.plist --entitlements .../MyApp/platforms/ios/build/MyApp.build/Release-iphoneos/MyApp.build/MyApp.app.xcent .../MyApp/platforms/ios/build/device/MyApp.app
9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6: no identity found
Command /usr/bin/codesign failed with exit code 1

** BUILD FAILED **


The following build commands failed:
    CodeSign build/device/MyApp.app
(1 failure)

The excerpt shown above will appear near the very end of the detailed build log. The unique number patterns in this example have been replaced with "xxxx" strings for security reasons. Your actual build log will contain hexadecimal strings.

iOS Code Sign error: bundle ID does not match app ID?

If you are getting an iOS build fail message in your detailed build log that includes a reference to a "Code Sign error" you may have a bad or inconsistent provisioning file. The "Code Sign" message in the build log excerpt, below, means that the bundle ID you specified in your Apple provisioning profile does not match the app ID you provided to the Intel XDK to upload with your application during the build phase.

Code Sign error: Provisioning profile does not match bundle identifier: The provisioning profile specified in your build settings (MyBuildSettings) has an AppID of my.app.id which does not match your bundle identifier my.bundleidentifier.
CodeSign error: code signing is required for product type 'Application' in SDK 'iOS 8.0'

** BUILD FAILED **

The following build commands failed:
    Check dependencies
(1 failure)
Error code 65 for command: xcodebuild with args: -xcconfig,...

The message above translates into "the bundle ID you entered in the project settings of the XDK does not match the bundle ID (app ID) that you created on Apple's developer portal and then used to create a provisioning profile."

iOS build error?

If your iOS build is failing with error code 65 and xcodebuild in the error log, most likely there are issues with your certificate and provisioning profile. Sometimes Xcode gives specific errors such as "Provisioning profile does not match bundle identifier" and other times something like "Code Sign error: No codesigning identities found: No code signing identities". The root of these issues is not providing the correct certificate (P12 file) and/or provisioning profile, or a mismatch between the P12 and the provisioning profile. You have to make sure your P12 and provisioning profile are correct. The provisioning profile has to be generated using the certificate you used to create the P12 file. Also, the app ID you provide in the XDK build settings has to match the app ID created on the Apple Developer portal, and the same app ID has to be used when creating the provisioning profile.

Please follow these steps to generate the P12 file.

  1. Create a .csr file from Intel XDK (do not close the dialog box to upload .cer file)
  2. Click on the link Apple Developer Portal from the dialog box (do not close the dialog box in XDK)
  3. Upload .csr on Apple Developer Portal
  4. Generate certificate on Apple developer portal
  5. Download .cer file from the Developer portal
  6. Come back to XDK dialog box where you left off from step 1, press Next. Select .cer file that you got from step 5 and generate .P12 file
  7. Create an appID on Apple Developer Portal
  8. Generate a Provisioning Profile on Apple Developer Portal using the certificate you generated in step 4 and appID created in step 7
  9. Provide the same appID (step 7), P12 (step 6) and Provisioning profile (step 8) in Intel XDK Build Settings 

A few things to check before you build:

  1. Make sure your certificate has not expired
  2. The appID you created on the Apple Developer portal matches the appID you provided in the XDK build settings
  3. You are using a provisioning profile that is associated with the certificate you are using to build the app
  4. Apple allows only three active certificates; if you need to create a new one, revoke one of the older certificates and create a new one

This App Certificate Management video shows how to create a P12 file and provisioning profile; the P12 creation part is at 16:45. Please follow the process for creating a P12 and generating a provisioning profile as shown in the video, or follow this Certificate Management document.

What are plugin variables used for? Why do I need to supply plugin variables?

Some plugins require details that are specific to your app or your developer account; for example, to authorize your app as one that belongs to you, the developer, so services can be properly routed to the service provider. The precise reasons depend on the specific plugin and its function.

What happened to the Intel XDK "legacy" build options?

On December 14, 2015 the Intel XDK legacy build options were retired and are no longer available to build apps. The legacy build option is based on three year old technology that predates the current Cordova project. All Intel XDK development efforts for the past two years have been directed at building standard Apache Cordova apps.

Many of the intel.xdk legacy APIs that were supported by the legacy build options have been migrated to standard Apache Cordova plugins and published as open source plugins. The API details for these plugins are available in the README.md files in the respective 01.org GitHub repos. Additional details regarding the new Cordova implementations of the intel.xdk legacy APIs are available in the doc page titled Intel XDK Legacy APIs.

Standard Cordova builds do not require the use of the "intelxdk.js" and "xhr.js" phantom scripts. Only the "cordova.js" phantom script is required to successfully build Cordova apps. If you have been including "intelxdk.js" and "xhr.js" in your Cordova builds, they have been quietly ignored. You should remove references to these files from your "index.html" file; leaving them in will do no harm, but it results in a warning that the respective script file cannot be found at runtime.

The Emulate tab will continue to support some legacy intel.xdk APIs that are NOT supported in the Cordova builds (only those intel.xdk APIs that are supported by the open source plugins are available to a Cordova built app, and only if you have included the respective intel.xdk plugins). This Emulate tab discrepancy will be addressed in a future release of the Intel XDK.

More information can be found in this forum post > https://software.intel.com/en-us/forums/intel-xdk/topic/601436.

Which build files do I submit to the Windows Store and which do I use for testing my app on a device?

There are two things you can do with the build files generated by the Intel XDK Windows build options: side-load your app onto a real device (for testing) or publish your app in the Windows Store (for distribution). Microsoft has changed the files you use for these purposes with each release of a new platform. As of December, 2015, the packages you might see in a build, and their uses, are:

  • appx works best for side-loading, and can also be used to publish your app.
  • appxupload is preferred for publishing your app; it will not work for side-loading.
  • appxbundle will work for both publishing and side-loading, but is not preferred.
  • xap is for legacy Windows Phone; works for both publishing and side-loading.

In essence: XAP (WP7) was superseded by APPXBUNDLE (Win8 and WP8.0), which was superseded by APPX (Win8/WP8.1/UAP), which has been supplemented with APPXUPLOAD. APPX and APPXUPLOAD are the preferred formats. For more information regarding these file formats, see Upload app packages on the Microsoft developer site.

Side-loading a Windows Phone app onto a real device, over USB, requires a Windows 8+ development system (see Side-Loading Windows* Phone Apps for complete instructions). If you do not have a physical Windows development machine you can use a virtual Windows machine or use the Window Store Beta testing and targeted distribution technique to get your app onto real test devices.

Side-loading a Windows tablet app onto a Windows 8 or Windows 10 laptop or tablet is simpler. Extract the contents of the ZIP file that you downloaded from the Intel XDK build system, open the "*_Test" folder inside the extracted folder, and run the PowerShell script (ps1 file) contained within that folder on the test machine (the machine that will run your app). The ps1 script file may need to request a "developer certificate" from Microsoft before it will install your test app onto your Windows test system, so your test machine may require a network connection to successfully side-load your Windows app.

The side-loading process may not overwrite an existing side-loaded app with the same ID. To be sure your test app side-loads properly, it is best to uninstall the old version of your app before side-loading a new version onto your test system.

How do I implement local storage or SQL in my app?

See this summary of local storage options for Cordova apps written by Josh Morony, A Summary of Local Storage Options for PhoneGap Applications.
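For small amounts of data, window.localStorage is the simplest of those options; it stores string key/value pairs in every Cordova webview, so objects must be serialized. The helpers below are a hypothetical sketch, with the store injected only so they can be exercised outside a webview (in a real app pass window.localStorage):

```javascript
// Persist a settings object as a JSON string.
function saveSettings(store, settings) {
    store.setItem("settings", JSON.stringify(settings));
}

// Read it back, returning null if nothing has been saved yet.
function loadSettings(store) {
    var raw = store.getItem("settings");
    return raw ? JSON.parse(raw) : null;
}
```

For larger or relational data sets, see the SQL-based options in the linked article.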

How do I prevent my app from auto-completing passwords?

Use the Ionic Keyboard plugin and set the spellcheck attribute to false.

Why does my PHP script not run in my Intel XDK Cordova app?

Your XDK app is not a page on a web server; you cannot use dynamic web server techniques because there is no web server associated with your app to which you can pass off PHP scripts and similar actions. When you build an Intel XDK app you are building a standalone Cordova client web app, not a dynamic server web app. You need to create a RESTful API on your server that you can then call from your client (the Intel XDK Cordova app) and pass and return data between the client and server through that RESTful API (usually in the form of a JSON payload).
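A client-side call to such a RESTful API might be sketched like this; fetchJson is a hypothetical helper, the URL is a placeholder for your own server's endpoint, and the XHR factory is injected only to keep the helper testable (in a real app pass function () { return new XMLHttpRequest(); }):

```javascript
// GET a JSON payload from a REST endpoint and hand it to a callback.
function fetchJson(createXhr, url, onSuccess, onError) {
    var xhr = createXhr();
    xhr.open("GET", url, true);
    xhr.onload = function () {
        if (xhr.status === 200) {
            onSuccess(JSON.parse(xhr.responseText)); // JSON payload from the server
        } else {
            onError(new Error("HTTP " + xhr.status));
        }
    };
    xhr.onerror = function () { onError(new Error("network error")); };
    xhr.send();
}
```

Remember to whitelist your server's domain so the request is allowed from the built app.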

Please see this StackOverflow post and this article by Ray Camden, a longtime developer of the Cordova development environment and Cordova apps, for some useful background.

Following is a lightly edited recommendation from an Intel XDK user:

I came from php+mysql web development. My first attempt at an Intel XDK Cordova app was to create a set of php files to query the database and give me the JSON. It was a simple job, but totally insecure.

Then I found dreamfactory.com, open source software that automatically creates the REST API functions from several databases, SQL and NoSQL. I use it a lot. You can start with a free account to develop and test and then install it on your server. Another possibility is phprestsql.sourceforge.net; this is a library that does what I tried to develop by myself. I did not try it, but perhaps it will help you.

And finally, I'm using PouchDB and CouchDB, "a database for the web." It is not SQL, but it is very useful and easy if you need to develop a mobile app with only a few tables. It will also work with a lot of tables, but for a simple database it is an easy place to start.

I strongly recommend that you start to learn these new ways to interact with databases. You will need to invest some time, but it is the way to go. Do not try to use MySQL and PHP the old-fashioned way; you can get it to work, but at some point you may get stuck.

Why doesn’t my Cocos2D game work on iOS?

This is an issue with Cocos2D and is not a reflection of our build system. As an interim solution, we have modified the CCBoot.js file for compatibility with iOS and App Preview. You can view an example of this modification in this CCBoot.js file from the Cocos2d-js 3.1 Scene GUI sample. The update has been applied to all cocos2D templates and samples that ship with Intel XDK. 

The fix involves two line changes (for the generic Cocos2D fix) and one additional line (for it to work in App Preview on iOS devices):

Generic Cocos2D fix -

1. Inside the loadTxt function, xhr.onload should be defined as

xhr.onload = function () {
    if (xhr.readyState == 4)
        xhr.responseText != "" ? cb(null, xhr.responseText) : cb(errInfo);
};

instead of

xhr.onload = function () {
    if (xhr.readyState == 4)
        xhr.status == 200 ? cb(null, xhr.responseText) : cb(errInfo);
};

2. The condition inside the _loadTxtSync function should be changed to

if (!xhr.readyState == 4 || (xhr.status != 200 || xhr.responseText != "")) {

instead of 

if (!xhr.readyState == 4 || xhr.status != 200) {

 

App Preview fix -

Add this line inside _loadTxtSync, after the xhr.open call:

xhr.setRequestHeader("iap_isSyncXHR", "true");

How do I change the alias of my Intel XDK Android keystore certificate?

You cannot change the alias name of your Android keystore within the Intel XDK, but you can download the existing keystore, change the alias on that keystore and upload a new copy of the same keystore with a new alias.

Use the following procedure:

  • Download the converted legacy keystore from the Intel XDK (the one with the bad alias).

  • Locate the keytool app on your system (this assumes that you have a Java runtime installed on your system). On Windows, this is likely to be located at %ProgramFiles%\Java\jre8\bin (you might have to adjust the value of jre8 in the path to match the version of Java installed on your system). On Mac and Linux systems it is probably located in your path (in /usr/bin).

  • Change the alias of the keystore using this command (see the keytool -changealias -help command for additional details):

keytool -changealias -alias "existing-alias" -destalias "new-alias" -keypass keypass -keystore /path/to/keystore -storepass storepass
  • Import this new keystore into the Intel XDK using the "Import Existing Keystore" option in the "Developer Certificates" section of the "person icon" located in the upper right corner of the Intel XDK.

What causes "The connection to the server was unsuccessful. (file:///android_asset/www/index.html)" error?

See this forum thread for some help with this issue. This error is most likely due to errors retrieving assets over the network or long delays associated with retrieving those assets.
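If slow asset loading is the cause, one workaround often suggested for this symptom is to raise Cordova's Android page-load timeout (the value is in milliseconds) via a preference in your project configuration. Treat this as an assumption to verify against the Cordova Android documentation for your CLI version:

```xml
<!-- Allow up to 120 seconds before Cordova gives up loading index.html -->
<preference name="loadUrlTimeoutValue" value="120000" />
```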

How do I manually sign my Android or Crosswalk APK file with the Intel XDK?

To sign an app manually, you must build your app by "deselecting" the "Signed" box in the Build Settings section of the Android tab on the Projects tab:

Follow these Android developer instructions to manually sign your app. The instructions assume you have Java installed on your system (for the jarsigner and keytool utilities). You may have to locate and install the zipalign tool separately (it is not part of Java) or download and install Android Studio.

These two sections of the Android developer Signing Your Applications article are also worth reading:

Why should I avoid using the additions.xml file? Why should I use the Plugin Management Tool in the Intel XDK?

The Intel XDK (version 2496 and later) includes a Plugin Management Tool that simplifies adding and managing Cordova plugins. We urge all users to manage the plugins in their existing or upgraded projects using this tool. If you used the intelxdk.config.additions.xml file to manage plugins in the past, you should remove those entries and add all plugins with the Plugin Management Tool instead.

Why you should be using the Plugin Management Tool:

  • It can now manage plugins from all sources. Popular plugins have been added to the Featured plugins list. Third-party plugins can be added from the Cordova Plugin Registry, a Git repo, or your file system.

  • Consistency: Unlike previous versions of the Intel XDK, plugins you add are now stored as a part of your project on your development system after they are retrieved by the Intel XDK and copied to your plugins directory. These plugin files are delivered, along with your source code files, to the Intel XDK cloud-based build server. This change ensures greater consistency between builds, because you always build with the plugin version that was retrieved by the Intel XDK into your project. It also provides better documentation of the components that make up your Cordova app, because the plugins are now part of your project directory. This is also more consistent with the way a standard Cordova CLI project works.

  • Convenience: In the past, the only way to add a third-party plugin that required parameters was to include it in the intelxdk.config.additions.xml file; the plugin would then be added to your project by the build system. This is no longer recommended. The new Plugin Management Tool automatically parses the plugin.xml file and prompts you to supply any plugin variables from within the Intel XDK.

    When a plugin is added via the Plugin Management Tool, a plugin entry is added to the project file and the plugin source is downloaded to the plugins directory making a more stable project. After a build, the build system automatically generates config xml files in your project directory that includes a complete summary of plugins and variable values.

  • Correctness of Debug Module: The Intel XDK now provides remote on-device debugging for projects with third-party plugins by building a custom debug module from your project's plugins directory. It does not read or write the intelxdk.config.additions.xml file; the only time that file is used is during a build. This means the debug module is not aware of plugins added via the intelxdk.config.additions.xml file, so adding plugins that way should be avoided. Here is a useful article for understanding Intel XDK Build Files.

  • Editing Plugin Sources: There are a few cases where you may want to modify plugin code to fix a bug in a plugin, or add console.log messages to a plugin's sources to help debug your application's interaction with the plugin. To accomplish these goals you can edit the plugin sources in the plugins directory. Your modifications will be uploaded along with your app sources when you build your app using the Intel XDK build server and when a custom debug module is created by the Debug tab.

How do I fix this "unknown error: cannot find plugin.xml" when I try to remove or change a plugin?

Sometimes removing a plugin from your project, or changing one, generates this error.

This is not a common problem, but if it does happen it means a file in your plugin directory is probably corrupt (usually one of the json files found inside the plugins folder at the root of your project folder).

The simplest fix is to:

  • make a list of ALL of your plugins (esp. the plugin ID and version number, see image below)
  • exit the Intel XDK
  • delete the entire plugins directory inside your project
  • restart the Intel XDK

The Intel XDK should detect that all of your plugins are missing and attempt to reinstall them. If it does not automatically reinstall some or all of your plugins, reinstall them manually from the list you saved in step one (see the image below for the important data that documents your plugins).

NOTE: if you reinstall your plugins manually, you can use the third-party plugin add feature of the plugin management system to specify the plugin ID and retrieve your plugins from the Cordova plugin registry. If you leave the version number blank, the latest version of the plugin available in the registry will be retrieved by the Intel XDK.

Why do I get a "build failed: the plugin contains gradle scripts" error message?

You will see this error message in your Android build log summary whenever you include a Cordova plugin that includes a gradle script in your project. Gradle scripts add extra Android build instructions that are needed by the plugin.

The current Intel XDK build system does not allow the use of plugins that contain gradle scripts because they present a security risk to the build system and your Intel XDK account. An unscrupulous user could use a gradle-enabled plugin to do harmful things with the build server. We are working on a build system that will ensure the necessary level of security to allow for gradle scripts in plugins, but until that time, we cannot support plugins that include gradle scripts.

The error message in your build summary log will look like the following:

In some cases the plugin's gradle script can be removed, but only if you manually modify the plugin to implement whatever the gradle script was doing automatically. Sometimes this is easy (for example, the gradle script may simply be building a JAR library file for the plugin), but sometimes the plugin is not easily modified to remove the need for the gradle script. Exactly what needs to be done depends on the plugin and the gradle script.

You can find out more about Cordova plugins and gradle scripts by reading this section of the Cordova documentation. In essence, if a Cordova plugin includes a build-extras.gradle file in the plugin's root folder, or if it contains one or more lines similar to the following, inside the plugin.xml file:

<framework src="some.gradle" custom="true" type="gradleReference" />

it means that the plugin contains gradle scripts and will be rejected by the Intel XDK build system.

How does one remove gradle dependencies for plugins that use Google Play Services (esp. push plugins)?

Our Android (and Crosswalk) CLI 5.1.1 and CLI 5.4.1 build systems include a fix for an issue in the standard Cordova build system that allows some Cordova plugins to be used with the Intel XDK build system without their included gradle script!

This fix only works with those Cordova plugins that include a gradle script for one and only one purpose: to set the value of applicationID in the Android build project files (such a gradle script copies the value of the App ID from your project's Build Settings, on the Projects tab, to this special project build variable).

Using the phonegap-plugin-push as an example: this Cordova plugin contains a gradle script named push.gradle, which looks like this:

import java.util.regex.Pattern

def doExtractStringFromManifest(name) {
    def manifestFile = file(android.sourceSets.main.manifest.srcFile)
    def pattern = Pattern.compile(name + "=\"(.*?)\"")
    def matcher = pattern.matcher(manifestFile.getText())
    matcher.find()
    return matcher.group(1)
}

android {
    sourceSets {
        main {
            manifest.srcFile 'AndroidManifest.xml'
        }
    }

    defaultConfig {
        applicationId = doExtractStringFromManifest("package")
    }
}

All this gradle script does is insert your app's "package ID" (the "App ID" in your app's Build Settings) into a variable called applicationID for use by the build system. It is needed, in this example, by the Google Play Services library to ensure that calls through the Google Play Services API can be matched to your app. Without the proper App ID, the Google Play Services library cannot distinguish between multiple apps on an end user's device that are using the library.

The phonegap-plugin-push is being used as an example for this article. Other Cordova plugins exist that can also be used by applying the same technique (e.g., the pushwoosh-phonegap-plugin will also work using this technique). It is important that you first determine that only one gradle script is being used by the plugin of interest and that this one gradle script is used for only one purpose: to set the applicationID variable.

How does this help you and what do you do?

To use a plugin with the Intel XDK build system that includes a single gradle script designed to set the applicationID variable:

  • Download a ZIP of the plugin version you want to use (e.g. version 1.6.3) from that plugin's git repo.

    IMPORTANT: be sure to download a released version of the plugin; the "head" of the git repo may be "under construction." Some plugin authors make it easy to identify a specific version, some do not -- be aware and careful when choosing what you clone from a git repo!

  • Unzip that plugin onto your local hard drive.

  • Remove the <framework> line that references the gradle script from the plugin.xml file.

  • Add the modified plugin into your project as a "local" plugin (see the image below).

In this example, you will be prompted to define a variable that the plugin also needs. If you know that variable's name (it's called SENDER_ID for this plugin), you can add it using the "+" icon in the image above, and avoid the prompt. If the plugin add was successful, you'll find something like this in the Projects tab:

If you are curious, you can inspect the AndroidManifest.xml file that is included inside your built APK file (you'll have to use a tool like apktool to extract and reconstruct it from your APK file). You should see something like the following highlighted line, which should match your App ID; in this example, the App ID was io.cordova.hellocordova:

If you see the following App ID, it means something went wrong. This is the default App ID for the Google Play Services library that will cause collisions on end-user devices when multiple apps that are using Google Play Services use this same default App ID:


Case Study: The “Smartphone as Next-Gen Automotive Infotainment” Concept


Overview

Apple and Google are developing connectivity between smartphones and automotive infotainment systems. This connectivity pairs the existing smartphone ecosystem with the power of the car's Human-Machine Interface (HMI) system. Drivers who use a standalone smartphone without integrating it into the car's systems face many awkward, and thus dangerous, operations. To address these problems, we introduce the "Smartphone as Next-Gen Automotive Infotainment" concept. The concept consists of several apps that are optimized for Intel® processor-based Android* devices or that incorporate the Intel® Context Sensing SDK. In this article, we walk through several development examples, such as 64-bit Android optimization and adoption of the Intel® Context Sensing SDK, and then explain how to integrate them into the automotive infotainment ecosystem.

Navigation Software – ZENRIN Datacom "ZENRIN Its-mo NAVI [DRIVE]"

Navigation software is a core component of the automotive infotainment system. We collaborated with ZENRIN Datacom, a long-established geolocation service company. They developed “ZENRIN Its-mo NAVI [DRIVE]” for Android.

With navigation software, the focal point is offering not only street layouts and buildings, but also precise 3D terrain rendering. Thanks to its full 3D architecture, users can enjoy driving with intuitive visual maps.

Development Background

ZENRIN Datacom is working hard to expand the automobile navigation market. Intel has worked closely with ZENRIN Datacom since Intel started working on Android optimization; for example, ZENRIN Datacom released the first x86-native navigation software for Android in 2012.

Focused on x86 64-bit Support

ZENRIN Datacom first became interested in 64-bit architecture because its application is graphically intensive, like mainstream gaming solutions, so wider data transfers could potentially improve performance. Because of its policy of "supporting various hardware and OS platforms aggressively," ZENRIN Datacom focuses on code portability from the early stages of development, so supporting 64-bit was simply a matter of incorporating basic methods such as working with 64-bit registers. Additionally, with 64-bit support the company's application reduced CPU load by about 20 percent1 compared to the 32-bit version, an improvement aided by the wider data path of the 64-bit architecture.

1 Configuration: ASUS ZenFone* 2 ZE551ML: Intel® Atom™ processor Z3580, PowerVR G6430* graphics, 2 GB RAM, 1080x1920 resolution, CPU load measured by Intel® Graphics Performance Analyzers, Workload: Itsmo Navi [DRIVE] v2.7 32-bit/64-bit version, navigation demo mode.

Voice Assistant Software – mia powered by netpeople* by iNAGO

A vital part of a modern car infotainment system is a voice-activated system, with which the user can operate the infotainment system by voice without touching any buttons or the screen. In this category, iNAGO is one of the major "voice assistant" service providers and has a long track record of adaptations in car navigation systems. It also participated in developing this automotive infotainment concept.

Development Background

Voice interaction is one of the most suitable methods for drivers to use connected services, since it does not disturb driving. iNAGO offers mia for Android to showcase its core conversational assistant technology. The app provides voice-based services such as restaurant search, local information search, weather forecasts, scheduling, mail, music, and more. iNAGO strategically collaborates with car infotainment systems to expand its usage model.

Focused on Context-Oriented Development

This app originally had a Drive mode, which used a simple UI with optimized information and graphics to minimize driver distraction, but it required manual operation to change modes. To achieve fully automated operation, iNAGO developed an external API that can be manipulated from the Intel Context Sensing SDK (described later). This can be done simply by sending a standard Android "intent" action like the following:

Package name: jp.co.inago.netpeoplea
Class name: jp.co.inago.netpeoplea.NPMainActivity
Name: INPUT
Value: DRIVE

Thanks to this function, the user gets a UI optimized for the situation: when the phone is operated while not driving, the normal UI is displayed; when driving, the UI changes to DRIVE mode automatically. It is also vital that the user can hand off a destination found with iNAGO's mia to ZENRIN Datacom's navigation software. To achieve this, the two companies cooperated to develop mutually compatible APIs. Now the two apps operate seamlessly, as if they were a single, integrated app.

Context Aware Software – SmarterApps’ AutomateIt

SmarterApps’ app is, in some ways, the core software for delivering next-gen automotive infotainment. Most built-in or mobile-device-based infotainment systems pay some attention to what should happen in the car, but they ignore the "arriving at the car" and "leaving the car" use cases. To fill this gap, we prepared the Intel Context Sensing SDK and asked SmarterApps to deliver a fully automated user experience.

Development Background

SmarterApps’ product AutomateIt* is a mobile app for Android. Users can configure rules that make the device automatically respond to certain events. This saves time and more importantly doesn’t require the driver’s attention. Actions include switching the phone to silent at night or during calendar meetings, launching the navigation app when the user starts driving, and more. In addition, SmarterApps was already working with Intel to deliver an x86 native version app, so it was a good opportunity to strengthen the relationship.

Intel® Context Sensing SDK

The Intel Context Sensing SDK is a library, available for Android and Windows*, that helps you easily incorporate context-aware capabilities and services. The SDK includes Context APIs, which can be used to create context-aware apps by taking advantage of many built-in context type providers. In addition, when you run Intel Context Sensing SDK-powered apps on Intel® processor-based devices that have a "Sensor Hub," the SDK automatically uses it to maximize battery life.

Intel Context Sensing SDK Adaptation and Car Mode Support

AutomateIt is built on sensing the user's context and reacting to commands within that context, which makes the Intel Context Sensing SDK a perfect fit for the app's core functionality. The SDK provides a single API that wraps numerous system APIs, with the benefit of optimizing battery use on Intel processor-based devices. It also provides functions such as Activity Recognition that would otherwise require the Google Play* Services library, which is not part of Android and exists only on devices with the Google Play Store. Adapting the Intel Context Sensing SDK was a matter of simply adding the SDK libraries to the APK and then replacing the relevant APIs with those the SDK provides.

As part of a next-gen car infotainment system, AutomateIt added a feature that allows toggling the device to “Car Mode.” This feature lets you build rules that use the Intel Context Sensing SDK Activity Recognition feature as a trigger to identify that the user is driving and then pairs it with the new Action to activate “Car mode.”

Context Aware Software – Neura*

We should focus not only on the comprehensive user experience as it relates to the car, but also on home and office locations. But how can we detect arrival at or departure from a home or office? We think Neura is one of the best solutions for this usage. Neura is an AI-based awareness development platform designed to profile, recognize, and predict the user of a mobile device. For example, after several learning periods, Neura can detect your arrival or departure automatically and then take the required actions, either by itself or in connection with AutomateIt.

Development Background

As a company whose mission is to create the most comprehensive digital picture of the user’s life, Neura welcomes any advancements in hardware technology that make data collection from sensors easier and more efficient. Neura believes that the future of handheld computing lies in hardware-enabled sensing solutions that are service-agnostic, such as Sensor Hub*. In addition, the Neura and Intel teams have the same philosophy of delivering next-gen user experience in cars, so working together was a natural fit.

x86 native support and Intel Context Sensing SDK Adaptation

Neura has developed x86 native architecture support, which involved just adding "APP_ABI := x86" to the Application.mk. Neura also optimized its sensor analysis algorithm for the test device provided by Intel, as it was the first time that system-on-chip sensor subsets were used. Because part of the test included an implementation of Intel Context Sensing SDK support, Neura added it as well, adjusting its Machine Learning (ML) algorithms so that the SDK could be utilized. The effort to optimize Neura's ML for the Intel Context Sensing SDK, which isn't a regular part of Neura's ML, was the biggest undertaking of the collaboration between Neura and Intel on this project.

Integrating the Four Apps into One System

Now we need to integrate the apps into one system, in this case the Android device. We chose the ASUS ZenFone 2 ZE551ML as a test device and installed the apps above, as well as several related apps such as a "car dashboard" app that responds to Android's "car mode" (you can find several such apps by searching Google Play for this keyword). In addition, we prepared specific AutomateIt rules for arriving at and leaving the car, using custom sensor and power-plug conditions as triggers. Usually, sensor monitoring sacrifices battery life because it prevents the CPU from deep sleeping, but thanks to the Intel Context Sensing SDK, this solution offloads sensor monitoring transactions to the Sensor Hub, minimizing the battery life impact. When you get into the car, just attach the phone to the holder and plug it into the power source; the phone immediately changes to car mode. Similarly, when you get out of the car, just unplug the power source and go; the phone immediately returns to normal mode. You can download the rules from AutomateIt's rules market; use "car infotainment" as the search keyword.

Conclusion

Working with these groups, we delivered a complete next-gen automotive infotainment system. We verified the system, confirmed its usability, and will continue working with related vendors on further improvements. A modern automotive infotainment system can be developed by combining several techniques, such as 64-bit Android optimization and adoption of the Intel Context Sensing SDK. Although this article focused on an Android target, all four companies are conscious of cross-platform and cross-architecture solutions, and the Intel Context Sensing SDK also supports cross-platform development.

Related Vendors and Their Applications

ZENRIN DataCom provides various applications and services related to location. Its goal is to provide "reliable information centered around individual users with concern for the finer details," surpassing current information technology to provide a "service which provides the information necessary for users to take action." "ZENRIN Its-mo NAVI [DRIVE]" is available on Google Play (Japan only).

iNAGO has led the way in Human-Computer Interaction for over a decade. Based in Tokyo and Toronto, it has worked with a variety of major companies to provide conversational personal assistants for any device, in fields ranging from mobile to finance. Its netpeople platform powers intelligent assistants that enable drivers to stay connected while keeping them safe. mia is available on Google Play (Japan only); an English version for North America is available upon request.

SmarterApps is an Israeli company developing mobile apps designed to make your smartphone smarter. The company's main product, AutomateIt, is available on Google Play.

Neura was founded in 2012 and is a leading provider of AI-based awareness development platforms. Constantly learning and adapting, Neura produces aggregated profiles of the user's physical activity, real-time reactions to important moments in the user's life, and predictions regarding the user's future actions. Neura is available on Google Play.

About the author

Sakemoto is an application engineer in Intel K.K.'s Software & Services Group (SSG). He is responsible for software enabling and works with application vendors. Prior to his current job, he was a software engineer for various mobile devices, including embedded Linux* and Windows* devices.
