College of Business & Economics, Department of Applied Statistics


Notices

Title: [Notice] A statistics seminar hosted by the Department of Statistics and the Research Center for Data Science will be held on April 16, 2012.
Author: Jong-eun Lee   Date: 12.04.09   Views: 12908

Department of Statistics Seminar Announcement


■ Seminar 1

▷ Date/Time: Monday, April 16, 2012, 4:00–4:50 PM
▷ Speaker: Prof. Myung-Min Kim (University at Buffalo, Dept. of Biostatistics)
▷ Venue: Chung-Ang University, Law School Building, B1, Information Center Room 1
▷ Topic: A Progressive Block Empirical Likelihood Method for Time Series


■ Seminar 2

▷ Date/Time: Monday, April 16, 2012, 5:00–5:50 PM
▷ Speaker: Prof. Hosik Choi (Hoseo University, Dept. of Information Statistics)
▷ Venue: Chung-Ang University, Law School Building, B1, Information Center Room 1
▷ Topic: Some computational algorithms in sparse supervised learning



Abstract


[Seminar 1]

This paper develops a new blockwise empirical likelihood (BEL) method for stationary, weakly dependent time processes, called the progressive block empirical likelihood (PBEL). In contrast to the standard version of BEL, which uses data blocks of constant length for a given sample size and whose performance can depend crucially on the block length selection, this new approach involves a data blocking scheme in which blocks increase in length by an arithmetic progression. Consequently, no block length selection is required for the PBEL method, which implies a certain type of robustness for this version of BEL. For inference of smooth functions of the process mean, theoretical results establish the chi-square limit of the log-likelihood ratio based on PBEL, which can be used to calibrate confidence regions. Simulation evidence indicates that the method can perform comparably to the standard BEL in coverage accuracy (when the latter uses a "good" block choice) and can exhibit more stability, all without the need to select a block length.

Keywords: Arithmetic progression; Block bootstrap; Stationarity; Weak dependence
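
To make the blocking scheme concrete, here is a minimal Python sketch, assuming blocks of lengths 1, 2, 3, ... (an arithmetic progression with unit step) and ignoring how the paper weights blocks or treats the leftover tail. The function progressive_blocks and its defaults are illustrative choices, not the authors' implementation.

```python
import numpy as np

def progressive_blocks(x, start=1, step=1):
    """Split a series into consecutive blocks whose lengths follow an
    arithmetic progression: start, start+step, start+2*step, ...

    Illustrative only: the starting length, the step, and the handling
    of the leftover tail are assumptions, not the paper's exact scheme.
    """
    blocks, i, length = [], 0, start
    n = len(x)
    while i + length <= n:
        blocks.append(x[i:i + length])
        i += length
        length += step
    return blocks  # any trailing partial block is simply dropped here

# Example: block means are the building blocks of an empirical-likelihood
# ratio for the process mean.
rng = np.random.default_rng(0)
x = rng.standard_normal(200)      # stand-in for a stationary series
means = [b.mean() for b in progressive_blocks(x)]
print(len(means), means[:3])
```

Because the block lengths are fixed by the progression, no tuning constant analogous to the standard BEL block length appears anywhere in this construction.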

[Seminar 2]

Variable selection is a fundamental task in high-dimensional statistical supervised learning. Traditional approaches follow stepwise and subset selection procedures, which are computationally intensive, unstable, and whose sampling properties are difficult to derive. Alternative variable selection methods are sparse penalized approaches, including bridge regression (Frank and Friedman, 1993), the least absolute shrinkage and selection operator (LASSO; Tibshirani, 1996), the smoothly clipped absolute deviation (SCAD) penalty (Fan and Li, 2001), and the minimax concave penalty (Zhang, 2010). In high-dimensional learning via penalized approaches, regularization demands algorithms with low computational cost, so practical implementations must be efficient. For this purpose, an algorithm that follows the entire solution path is sufficiently fast and stable to analyze high-dimensional data. In this talk, algorithms for sparse supervised learning problems, including regression, classification, quantile regression, and inverse covariance estimation, are considered. I will first present the LARS (least angle regression) algorithm (Efron et al., 2004) briefly, and then follow-up algorithms for various supervised learning problems. I will also show some recent work extending these to non-convex optimization problems.
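
As one concrete instance of sparse penalized estimation, below is a minimal Python sketch of cyclic coordinate descent with soft-thresholding for the lasso. This is a generic textbook routine (the names lasso_cd and soft_threshold are made up here), not the speaker's algorithm, and it assumes standardized columns of X.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: the closed-form lasso update."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for (1/(2n))||y - Xb||^2 + lam*||b||_1.
    Illustrative sketch; assumes columns of X are standardized."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r_j = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r_j, n * lam) / col_sq[j]
    return beta

# Sweep a decreasing penalty grid to trace a (crude) solution path.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10)); X /= X.std(axis=0)
y = X[:, 0] * 3.0 + rng.standard_normal(100)
for lam in [1.0, 0.3, 0.1]:
    print(lam, np.round(lasso_cd(X, y, lam), 2)[:4])
```

A genuine path-following implementation, such as LARS or warm-started coordinate descent, would reuse the previous solution as the penalty decreases; the grid loop above refits from zero for simplicity.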





Department of Statistics &
The Research Center for Data Science